
Does this AI breakthrough herald the beginning of the end?

The short answer: sort of.

Google's AI specialists have trained their creation, AlphaGo, to beat a human at a game previously thought too difficult for any artificial intelligence to win. 

The London-based team at DeepMind have been building AI systems since 2010, creating "deep neural networks" that learn from experience rather than following fixed rules. In 2014, Google acquired the company, helping to kick its research up a gear.

They took on the challenge of building an AI that could play the ancient Chinese game Go - a tactical board game with more possible board positions than there are atoms in the observable universe (it's vastly more complex than chess). Due to the subtleties of the game, humans have long had the edge over computers, with no AI capable of holding its own against a skilled human player.

Go has thus been the 'holy grail' of AI tests ever since computers started beating humans at chess. Facebook recently suggested it had built an AI that was close to beating a human player - but Google's DeepMind AI has somewhat spoilt the party.

In October 2015, DeepMind's AlphaGo AI played five games against reigning European Go champion Fan Hui, one of the strongest Go players of his generation.

AlphaGo won all five games. 

Such was the level of AlphaGo's "skill" that experienced Go players couldn't distinguish between the AI and the human.
 
The method of learning built into AlphaGo's system is key to its win - and marks a major step forward in artificial intelligence.
 
As explained in the journal Nature, the system first learnt the rules of Go, then studied 30 million moves from expert players. That was enough to 'play' Go, but not to learn from its own games: it could only ever be as good as the moves it had learnt. DeepMind then set AlphaGo to play against itself, learning new moves as it went.
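To make that two-stage idea concrete, here's a minimal sketch in Python - not DeepMind's actual code, which uses deep neural networks and a vastly harder game - showing the same pattern on a toy take-away game: a simple policy is first nudged towards 'expert' moves, then sharpened by playing against itself and reinforcing whatever the winner did. All names here (Policy, supervised_phase, self_play_phase) are illustrative.

import random
from collections import defaultdict

MOVES = (1, 2, 3)        # a player removes 1, 2 or 3 stones per turn
START_STONES = 10        # whoever takes the last stone wins


def legal_moves(stones):
    return [m for m in MOVES if m <= stones]


class Policy:
    """Tabular policy: a preference weight for each (stones, move) pair."""

    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)

    def choose(self, stones):
        moves = legal_moves(stones)
        w = [self.weights[(stones, m)] for m in moves]
        return random.choices(moves, weights=w)[0]

    def reinforce(self, history, amount):
        # Nudge the weight of every move played in 'history' up or down.
        for stones, move in history:
            new = self.weights[(stones, move)] + amount
            self.weights[(stones, move)] = max(0.1, new)


def supervised_phase(policy, expert_games):
    """Stage 1: imitate 'expert' moves by reinforcing them directly."""
    for game in expert_games:
        policy.reinforce(game, amount=0.5)


def self_play_phase(policy, games=5000):
    """Stage 2: the policy plays itself; the winner's moves get reinforced."""
    for _ in range(games):
        stones, player = START_STONES, 0
        histories = ([], [])
        while stones > 0:
            move = policy.choose(stones)
            histories[player].append((stones, move))
            stones -= move
            if stones == 0:
                winner = player              # took the last stone
            player = 1 - player
        policy.reinforce(histories[winner], amount=0.2)
        policy.reinforce(histories[1 - winner], amount=-0.05)


if __name__ == "__main__":
    policy = Policy()
    # Hypothetical "expert" data: the known winning play for this toy game
    # is to leave your opponent a multiple of four stones.
    expert_games = [[(s, s % 4)] for s in range(START_STONES, 0, -1) if s % 4]
    supervised_phase(policy, expert_games)
    self_play_phase(policy)
    print(policy.choose(START_STONES))       # usually prints 2 (10 % 4)

The real system swaps the weight table for deep neural networks and adds a tree search over candidate moves, but the shape is the same: learn from expert games first, then keep improving through self-play.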
 
This learning resulted in a 99 per cent success rate against every other Go program it encountered. 

So what? AI isn't going to take over the world via board games.

No, but DeepMind's hope is that it can use the AlphaGo system to develop a general-purpose algorithm for similar AI learning problems.

If the techniques behind AlphaGo's game-play can be applied to other complex real-life scenarios, AI - or robots controlled by AI - could soon start outlearning humans.

And we all know what that means...
