Does this AI breakthrough herald the beginning of the end?
28 January 2016

The short answer: sort of.

Google's AI specialists have trained their creation, AlphaGo, to beat a human at a game previously thought too difficult for any artificial intelligence to win. 

The London-based DeepMind team has been building AI systems since 2010, creating "deep neural networks" that can do far more than the average computer. In 2014, the company caught the interest of Google, which bought it and kicked the research up a gear.

They took on the challenge of building an AI that could play the ancient Chinese game Go - a tactical board game with more possible positions than there are atoms in the observable universe (it's vastly more complex than chess). Due to the subtleties of the game, humans have long had an edge over computer players, with no AI capable of holding its own against a skilled human player.

Go has thus been the 'holy grail' of AI tests ever since computers started beating humans at chess. Facebook recently suggested it had built an AI that was close to beating a human player - but Google's DeepMind AI has somewhat spoilt the party.

In October 2015, DeepMind's AlphaGo AI played five games against reigning European Go champion Fan Hui, one of the greatest Go players of his generation.

AlphaGo won all five games. 

Such was the level of AlphaGo's "skill" that experienced Go players couldn't distinguish between the AI and the human.
 
The method of learning built into AlphaGo's system is key to its win - and marks a major step forward in artificial intelligence.
 
As explained in the journal Nature, the system first learnt the rules of Go, then studied 30 million moves played by expert humans. At that point it could 'play' Go, but it couldn't learn from its own experience: it could only ever be as good as the moves it had been shown. DeepMind then set AlphaGo to play against itself, learning new moves as it went.
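To give a rough sense of that two-stage idea - copy the experts first, then improve by playing yourself - here is a deliberately toy sketch in Python. It is purely illustrative and nothing like DeepMind's actual system (which relies on deep neural networks and tree search): it swaps Go for a trivial stick-taking game and uses a simple lookup table of move preferences.

import random
from collections import defaultdict

# Hypothetical toy game, NOT Go: take 1 or 2 sticks, whoever takes the last stick wins.
MOVES = (1, 2)

def expert_move(sticks):
    """A hand-coded 'expert': leave your opponent a multiple of 3 if you can."""
    for m in MOVES:
        if m <= sticks and (sticks - m) % 3 == 0:
            return m
    return 1

# The "policy": a preference weight for each move in each position.
policy = defaultdict(lambda: {m: 1.0 for m in MOVES})

def pick_move(sticks):
    """Choose a legal move at random, biased by the learned preferences."""
    legal = [m for m in MOVES if m <= sticks]
    weights = [policy[sticks][m] for m in legal]
    return random.choices(legal, weights=weights)[0]

# Phase 1: learn from the "expert" - reinforce whatever move it would play.
for _ in range(5000):
    sticks = random.randint(1, 20)
    policy[sticks][expert_move(sticks)] += 1.0

# Phase 2: self-play - both sides use the policy; the winner's moves get boosted.
for _ in range(20000):
    sticks, history, player = random.randint(1, 20), [], 0
    while sticks > 0:
        move = pick_move(sticks)
        history.append((player, sticks, move))
        sticks -= move
        player = 1 - player
    winner = 1 - player  # whoever took the last stick wins
    for p, s, m in history:
        policy[s][m] += 0.1 if p == winner else -0.05
        policy[s][m] = max(policy[s][m], 0.01)  # keep weights positive

print("Preferred move with 7 sticks left:",
      max(policy[7], key=policy[7].get))

Even in this crude form the pattern is the same: imitation gets the player to 'competent', and self-play lets it improve beyond the examples it was given.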
 
This learning resulted in a 99 per cent success rate against every other Go program it encountered. 

So what? AI isn't going to take over the world via board games.

No, but DeepMind's hope is that the AlphaGo system can be used to develop a general-purpose algorithm for similar AI learning problems.

If they can transfer what works for Go to other complex real-life scenarios, AI - or robots controlled by AI - could soon start out-learning humans.

And we all know what that means...