
MowCowWhoHow III

(2,103 posts)
Sat Mar 12, 2016, 06:34 AM

Game over! Computer wins series against Go champion

Source: AFP

A Google-developed computer programme took an unassailable 3-0 lead in its match-up with a South Korean Go grandmaster on Saturday -- marking a major breakthrough for a new style of "intuitive" artificial intelligence (AI).

The programme, AlphaGo, secured victory in the five-match series with its third consecutive win over Lee Se-Dol -- one of the ancient game's greatest modern players, with 18 international titles to his name.

Lee, who has topped the world ranking for much of the past decade and confidently predicted an easy victory when accepting the AlphaGo challenge, now finds himself fighting to avoid a whitewash defeat in the two remaining games on Sunday and Tuesday.

"AlphaGo played consistently from beginning to the end while Lee, as he is only human, showed some mental vulnerability," said one of Lee's former coaches, Kwon Kap-Yong. "The machine was increasingly gaining the upper hand as the series progressed," Kwon said.

Read more: http://news.yahoo.com/computer-wins-3-0-series-victory-over-korean-082146733.html;_ylt=AwrXgiJN0.NWDDoADMzQtDMD;_ylu=X3oDMTByb2lvbXVuBGNvbG8DZ3ExBHBvcwMxBHZ0aWQDBHNlYwNzcg--



The Sadness and Beauty of Watching Google’s AI Play Go

SEOUL, SOUTH KOREA — At first, Fan Hui thought the move was rather odd. But then he saw its beauty.

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.

“That’s a very strange move,” said one of the match’s English language commentators, who is himself a very talented Go player. Then the other chuckled and said: “I thought it was a mistake.” But perhaps no one was more surprised than Lee Sedol, who stood up and left the match room. “He had to go wash his face or something—just to recover,” said the first commentator.

Even after Lee Sedol returned to the table, he didn’t quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo’s move didn’t seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area. AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time—a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years. The commentators couldn’t even begin to evaluate the merits of the move.

http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/

8 replies

phantom power

(25,966 posts)
3. "AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before."
Sat Mar 12, 2016, 11:42 AM

I don't think people should be too upset by this, any more than we're upset that we can't run as fast as a race car. It's a machine, and it can explore the game space millions of times faster than any human. If it hadn't won this time, they could just double the number of CPUs until it finally happened.

Bernardo de La Paz

(49,001 posts)
5. The thing about Go is that it is not as amenable to throwing CPUs at the problem
Sat Mar 12, 2016, 02:41 PM

The combinatorial explosion of possible moves makes chess look like checkers in comparison.

Go is on a 19 x 19 board, not 8x8. The games are much longer with many more moves. There are fewer obvious capture branches of play that can be pruned quickly.

The reason AlphaGo does as well as it does is that it avoids simply throwing CPUs at the problem, even though it does use a lot of them. It learns "intuitively", by recognizing patterns, so the initial selection of moves to try at each branch node is smaller and more likely to contain strong moves.

If you are testing 20 candidate moves per position (out of 361 possible at the start and roughly 180 in the midgame), and you look ten moves deep (which is not very deep), then you are evaluating 20^10 ≈ 10^13 positions, about 10,000,000,000,000 (ten trillion).

If you can narrow that to 8 candidate moves per branch, you evaluate 8^10 ≈ 1,000,000,000 positions, only about a billion. So in practice you can look deeper.
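A quick back-of-the-envelope check of those numbers (the branching factors and the 10-ply depth are just my illustrative figures, not AlphaGo's real search parameters):

```python
# Rough check of the arithmetic above. The branching factors (20 vs. 8
# candidate moves per node) and the 10-ply depth are illustrative figures
# from this post, not AlphaGo's actual parameters.
DEPTH = 10

for branching_factor in (20, 8):
    positions = branching_factor ** DEPTH
    print(f"{branching_factor} moves per node, {DEPTH} plies deep: {positions:,} positions")

# 20 moves per node, 10 plies deep: 10,240,000,000,000 positions  (~10 trillion)
#  8 moves per node, 10 plies deep:      1,073,741,824 positions  (~1 billion)
```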

But it gets even better when you intuitively prune branches earlier. This is what humans do. They evaluate fewer moves per branch, prune most branches very early and follow the best branches deeper.
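Roughly, the idea looks like this in code. To be clear, this is only a toy sketch of policy-guided pruning, not AlphaGo's actual algorithm (which combines Monte Carlo tree search with policy and value networks), and legal_moves, apply_move, policy_score and evaluate are hypothetical stand-ins for functions a real Go engine would provide:

```python
# Toy sketch of policy-guided pruning, NOT AlphaGo's actual search
# (AlphaGo uses Monte Carlo tree search guided by policy/value networks).
# legal_moves, apply_move, policy_score and evaluate are hypothetical
# stand-ins for functions a real Go engine would provide.

def search(position, depth, top_k=8):
    """Estimate the value of `position`, expanding only the `top_k`
    moves the policy rates most promising at each node (negamax)."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # static, "intuitive" evaluation
    # Prune early: keep only the few moves the learned policy likes.
    candidates = sorted(moves,
                        key=lambda m: policy_score(position, m),
                        reverse=True)[:top_k]
    # Follow the surviving branches deeper.
    return max(-search(apply_move(position, m), depth - 1, top_k)
               for m in candidates)
```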

phantom power

(25,966 posts)
6. Sure, it has to prune the search space. A lot.
Sat Mar 12, 2016, 02:47 PM

But it's still all about combining good pruning heuristics with a pantload of CPU power. Searching the game space scales out easily, too. So throwing more horsepower at it is a matter of adding commodity hardware.
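For what it's worth, here's a minimal sketch of why that scales out: independent playouts (games played to the end from a position) don't share any state, so you can farm them out to as many processes or machines as you have and just average the results. random_playout here is a hypothetical function that plays one game out from the position and returns 1 for a win, 0 for a loss:

```python
# Minimal sketch of scale-out via independent playouts. `random_playout`
# is a hypothetical function that plays a game to the end from `position`
# and returns 1 for a win, 0 for a loss.
from multiprocessing import Pool

def estimate_win_rate(position, playouts=10_000, workers=8):
    """Average many independent playouts run across worker processes."""
    with Pool(workers) as pool:
        results = pool.map(random_playout, [position] * playouts)
    return sum(results) / len(results)
```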

My point is, it was totally inevitable. It was a matter of when and how, not if. If the human had won this time, they could buy a thousand more cheap rack servers. And another thousand, and another...

And sure, they can also make their learning models smarter and smarter. I think it is cool how they learned new things about the game while they worked on its pruning heuristics.

