Game over! Computer wins series against Go champion
Source: AFP
A Google-developed computer programme took an unassailable 3-0 lead in its match-up with a South Korean Go grandmaster on Saturday -- marking a major breakthrough for a new style of "intuitive" artificial intelligence (AI).
The programme, AlphaGo, secured victory in the five-match series with its third consecutive win over Lee Se-Dol -- one of the ancient game's greatest modern players, with 18 international titles to his name.
Lee, who has topped the world ranking for much of the past decade and confidently predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash defeat in the two remaining games on Sunday and Tuesday.
"AlphaGo played consistently from beginning to end, while Lee, as he is only human, showed some mental vulnerability," said one of Lee's former coaches, Kwon Kap-Yong. "The machine was increasingly gaining the upper hand as the series progressed," Kwon said.
Read more: http://news.yahoo.com/computer-wins-3-0-series-victory-over-korean-082146733.html;_ylt=AwrXgiJN0.NWDDoADMzQtDMD;_ylu=X3oDMTByb2lvbXVuBGNvbG8DZ3ExBHBvcwMxBHZ0aWQDBHNlYwNzcg--
SEOUL, SOUTH KOREA -- At first, Fan Hui thought the move was rather odd. But then he saw its beauty.
"It's not a human move. I've never seen a human play this move," he says. "So beautiful." It's a word he keeps repeating. Beautiful. Beautiful. Beautiful.
The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world's top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.
"That's a very strange move," said one of the match's English-language commentators, who is himself a very talented Go player. Then the other chuckled and said: "I thought it was a mistake." But perhaps no one was more surprised than Lee Sedol, who stood up and left the match room. "He had to go wash his face or something, just to recover," said the first commentator.
Even after Lee Sedol returned to the table, he didn't quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo's move didn't seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area. AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time, a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years. The commentators couldn't even begin to evaluate the merits of the move.
http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/
PoliticAverse (26,366 posts)
Bernardo de La Paz (49,001 posts)
phantom power (25,966 posts)
(25,966 posts)I don't think people should be too upset by this, any more than we're upset that we can't run as fast as a race car. It's a machine, and it can explore the game space millions of times faster than any human. If it hadn't won this time, they could just double the number of CPUs until it finally happened.
Bernardo de La Paz (49,001 posts)
The combinatorial explosion of possible moves makes chess look like checkers in comparison.
Go is on a 19 x 19 board, not 8x8. The games are much longer with many more moves. There are fewer obvious capture branches of play that can be pruned quickly.
AlphaGo does as well as it does because it avoids simply throwing CPUs at the problem, though a lot of CPUs are used. Instead it learns intuitively, by patterns. This way the initial selection of moves to try at each branch node is smaller and more likely to have power.
If you are testing 20 moves per position (out of 361 possible at the start and around 180 midgame), and you look ten moves deep (which is not very deep), then you are evaluating 20 raised to the power of 10 positions, which is roughly 10 ^ 13 or 10,000,000,000,000 positions (10 trillion).
If you can cut that to 8 moves per branch, then you evaluate 8 ^ 10 = roughly 1,000,000,000 positions; only about 1 billion. So in practice you can look deeper.
But it gets even better when you intuitively prune branches earlier. This is what humans do. They evaluate fewer moves per branch, prune most branches very early and follow the best branches deeper.
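The arithmetic above can be checked with a short Python sketch. The move counts and depth are the illustrative figures from this post, not AlphaGo's real search parameters:

```python
def positions_evaluated(moves_per_node: int, depth: int) -> int:
    """Size of a game tree that tries a fixed number of candidate
    moves at every node, searched to the given depth: b ** d."""
    return moves_per_node ** depth

# Testing 20 candidate moves per position, 10 moves deep:
wide = positions_evaluated(20, 10)    # 10,240,000,000,000 -- about 10 trillion

# Pruning to 8 candidate moves per position cuts that to about 1 billion:
narrow = positions_evaluated(8, 10)   # 1,073,741,824

# The saving compounds with depth: for the same ~10 trillion evaluations,
# an 8-move-per-node search can afford to look roughly 4 plies deeper.
print(wide, narrow, wide // narrow)
```

The ratio (about 10,000x) is why cutting the branching factor beats adding hardware: every move you prune at each node multiplies through the whole depth of the search.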
phantom power (25,966 posts)
But it's still all about combining good pruning heuristics with a pantload of CPU power. Searching the game space scales out easily, too. So throwing more horsepower at it is a matter of adding commodity hardware.
My point is, it was totally inevitable. It was a matter of when and how, not if. If the human had won this time, they could buy a thousand more cheap rack servers. And another thousand, and another...
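The "just add hardware" point holds because Monte Carlo playouts are independent of each other, so the search splits cleanly across cores or machines. A minimal sketch, with a random win/loss stand-in for a real Go playout (this is an illustration of the parallelism, not AlphaGo's actual evaluation):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def random_rollout(seed: int) -> int:
    """One independent playout. Stand-in: a real rollout would play
    random moves to the end of a game and score the final board;
    here we just return a random win (+1) or loss (-1)."""
    return random.Random(seed).choice([1, -1])

def estimate_value(n_rollouts: int, workers: int = 4) -> float:
    """Average many independent rollouts. Because no rollout depends
    on another, the work splits across workers -- or across racks of
    commodity servers -- with near-linear speedup."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(random_rollout, range(n_rollouts)))
    return sum(results) / n_rollouts
```

ThreadPoolExecutor keeps the sketch self-contained; a real CPU-bound search would use processes or separate machines, but the structure -- fan out independent playouts, average the results -- is the same.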
And sure, they can also make their learning models smarter and smarter. I think it is cool how they learned new things about the game while they worked on its pruning heuristics.