Past Polanyi's Paradox

A scientific paper published the very next month—January 2016—unveiled a Go-playing computer that wasn't being foiled anymore. A team at Google DeepMind, a London-based company specializing in machine learning (a branch of artificial intelligence we'll discuss more in Chapter 3), published "Mastering the Game of Go with Deep Neural Networks and Tree Search," and the prestigious journal Nature made it the cover story. The article described AlphaGo, a Go-playing application that had found a way around Polanyi's Paradox.

The humans who built AlphaGo didn't try to program it with superior Go strategies and heuristics. Instead, they created a system that could learn them on its own. It did this by studying lots of board positions in lots of games. AlphaGo was built to discern the subtle patterns present in large amounts of data, and to link actions (like playing a stone in a particular spot on the board) to outcomes (like winning a game of Go).
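To make that idea concrete, here is a minimal sketch of the kind of pattern learning involved: a small convolutional "policy" network that maps a board position to a probability distribution over the 361 points where a stone could be played. It is written in PyTorch as an assumption of convenience; the architecture, layer sizes, and training loop are invented for illustration and are far simpler than the networks described in the Nature paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a tiny move-prediction ("policy") network,
# much simpler than AlphaGo's. The board is encoded as 3 planes
# (black stones, white stones, empty points) on a 19x19 grid.
class TinyPolicyNet(nn.Module):
    def __init__(self, planes: int = 3, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(planes, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=1),  # one logit per board point
        )

    def forward(self, boards: torch.Tensor) -> torch.Tensor:
        # boards: (batch, 3, 19, 19) -> logits over 361 candidate moves
        return self.body(boards).flatten(1)

# Each training example pairs a board position with the move a strong
# player actually made there, so the network learns to imitate patterns
# of good play rather than being handed explicit strategies.
def train_step(net, optimizer, boards, expert_moves):
    loss = nn.functional.cross_entropy(net(boards), expert_moves)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Trained on millions of such examples, a network along these lines begins to encode regularities of good play that no programmer ever wrote down, which is precisely the route around Polanyi's Paradox described here.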

The software was given access to 30 million board positions from an online repository of games and essentially told, "Use these to figure out how to win." AlphaGo also played many games against itself, generating another 30 million positions, which it then analyzed. The system did conduct simulations during games, but only highly focused ones; it used the learning accumulated from studying millions of positions to simulate only those moves it thought most likely to lead to victory.
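Those "highly focused" simulations can be sketched in the same hedged spirit. Rather than rolling out every legal move, the program asks its learned policy for a handful of promising candidates and spends its entire simulation budget on them. The helper functions below (policy, legal_moves, play, random_playout) are hypothetical stand-ins for a real Go engine, and this is a cartoon of the idea, not AlphaGo's actual Monte Carlo tree search.

```python
# Sketch of "focused" simulation: simulate only the moves the learned
# policy ranks highly, instead of all legal moves. The four function
# arguments are hypothetical stand-ins for a real Go engine:
#   policy(position)         -> dict mapping each move to a probability
#   legal_moves(position)    -> list of legal moves
#   play(position, move)     -> the resulting position
#   random_playout(position) -> 1 if the player who just moved goes on
#                               to win the random playout, else 0
def choose_move(position, policy, legal_moves, play, random_playout,
                top_k: int = 5, playouts_per_move: int = 100):
    # Keep only the few candidates the learned policy considers promising.
    priors = policy(position)
    candidates = sorted(legal_moves(position),
                        key=lambda m: priors.get(m, 0.0),
                        reverse=True)[:top_k]

    # Spend the whole simulation budget estimating those candidates'
    # win rates, rather than spreading it across all 361 points.
    def win_rate(move):
        child = play(position, move)
        wins = sum(random_playout(child) for _ in range(playouts_per_move))
        return wins / playouts_per_move

    return max(candidates, key=win_rate)
```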

Work on AlphaGo began in 2014. By October of 2015, it was ready for a test. In secret, AlphaGo played a five-game match against Fan Hui, who was then the European Go champion. The machine won 5–0.

A computer Go victory at this level of competition was completely unanticipated and shook the artificial intelligence community. Virtually all analysts and commentators called AlphaGo's achievement a breakthrough. Debates did spring up, however, about its magnitude. As the cognitive scientist Gary Marcus pointed out, "Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn't be fair to say that it had mastered the game."

The DeepMind team evidently thought this was a fair point, because they challenged Lee Sedol to a five-game match to be played in Seoul, South Korea, in March of 2016. Sedol was regarded by many as the best human Go player on the planet, and one of the best in living memory. His style was described as "intuitive, unpredictable, creative, intensive, wild, complicated, deep, quick, chaotic"—characteristics that he felt would give him a definitive advantage over any computer. As he put it, "There is a beauty to the game of Go and I don't think machines understand that beauty.... I believe human intuition is too advanced for AI to have caught up yet." He predicted he would win at least four games out of five, saying, "Looking at the match in October, I think (AlphaGo's) level doesn't match mine."

The games between Sedol and AlphaGo attracted intense interest throughout Korea and other East Asian countries. AlphaGo won the first three games, ensuring overall victory in the best-of-five match. Sedol came back to win the fourth game. His victory gave some observers hope that human cleverness had discerned flaws in the digital opponent, ones that Sedol could continue to exploit. If such flaws existed, they were not big enough to make a difference in the final game. AlphaGo won again, completing a convincing 4–1 victory in the match.

Sedol found the competition grueling, and after his defeat he said, "I kind of felt powerless.... I do have extensive experience in terms of playing the game of Go, but there was never a case as this as such that I felt this amount of pressure."

Something new had passed Go.