Artificial Intelligence: Google’s AlphaGo versus Humanity

News that Google’s AlphaGo had beaten the 18-time world champion Lee Se-dol in the final game of their five-game Go series sent the media into a frenzy. Headlines such as “Lee Se-dol sits down to defend humanity” started appearing in the press, and with good reason.
AlphaGo, a machine built by Demis Hassabis’s DeepMind, had been in development for several years. It plays the game using deep neural networks and machine learning. It runs on two basic learning networks: one learns to predict likely upcoming moves, while the other evaluates arrangements of pieces on the board and predicts the likely outcome of the game. On top of that, the machine needs a certain amount of intuition to play the 2,500-year-old game of Go, whose vast number of possible positions makes it far more complex than chess.
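To make the two-network idea concrete, here is a minimal Python sketch of how a “policy” function (scoring likely next moves) and a “value” function (estimating the outcome of a position) might look. The layer shapes, names, and random weights here are purely illustrative assumptions; AlphaGo’s real networks are deep convolutional models trained on expert games and extensive self-play.

```python
import numpy as np

BOARD_SIZE = 19
N_POINTS = BOARD_SIZE * BOARD_SIZE

rng = np.random.default_rng(0)

# Toy "policy network": a single linear layer followed by a softmax.
# (Hypothetical stand-in for AlphaGo's deep convolutional policy network.)
policy_weights = rng.normal(scale=0.01, size=(N_POINTS, N_POINTS))

# Toy "value network": a single linear layer squashed into [-1, 1].
# (Hypothetical stand-in for AlphaGo's position-evaluation network.)
value_weights = rng.normal(scale=0.01, size=N_POINTS)


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def policy(board):
    """Return a probability for every point on the board (likely next moves)."""
    return softmax(board.ravel() @ policy_weights)


def value(board):
    """Return an estimate of the game outcome for the current player, in [-1, 1]."""
    return float(np.tanh(board.ravel() @ value_weights))


if __name__ == "__main__":
    # 0 = empty point, +1 = our stones, -1 = opponent's stones.
    empty_board = np.zeros((BOARD_SIZE, BOARD_SIZE))
    move_probs = policy(empty_board)
    print("most promising point:", divmod(int(move_probs.argmax()), BOARD_SIZE))
    print("estimated outcome:", value(empty_board))
```

In the real system these two evaluations are combined with a tree search over candidate moves, which is where the appearance of “intuition” comes from.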
This brings us to the point that if machines like AlphaGo can intuit moves in a board game, there is no reason why they can’t start to develop and incorporate a human-style response system.

DeepMind founder Demis Hassabis posted this tweet following the groundbreaking victory:

That being said, Lee Se-dol did manage to win game four against AlphaGo, though perhaps the machine was merely taking pity on him. Here you can see the fanfare that erupted over the win.

[Video: “massive applause for lee se-dol after beating #alphago”, posted by Sam Byford (@345triangle)]

What scared a lot of people was not that the computer was so brazenly dominant over one of the world’s greatest game geniuses, but the fact that this machine had, in its own time, sat by itself and played itself millions of times just to work out new ways of winning. What does all this signal? Well, perhaps it’s too soon to say. There are rumoured reports of a rematch. Watch this space for AlphaGo’s next move.