
Does this AI breakthrough herald the beginning of the end?

The short answer: sort of.

Google's AI specialists have trained their creation, AlphaGo, to beat a human at a game previously thought too difficult for any artificial intelligence to win. 

The London-based team at DeepMind have been building AI systems since 2010, creating "deep neural networks" that can do far more than the average computer program. In 2014 they were bought by Google, which kicked their research up a gear.

They took on the challenge of building an AI that could play the ancient Chinese game Go - a tactical board game with more possible positions than there are atoms in the observable universe (it's vastly more complex than chess). Due to the subtleties of the game, humans have long had the edge over computer players, with no AI capable of holding its own against a skilled human opponent.
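To get a rough sense of that gap, here's a back-of-the-envelope sketch - nothing from DeepMind's paper, just commonly cited approximate branching factors and game lengths - comparing the two game trees:

import math

# Commonly cited approximations, not exact figures.
CHESS_BRANCHING, CHESS_LENGTH = 35, 80    # ~35 legal moves per turn, ~80-move games
GO_BRANCHING, GO_LENGTH = 250, 150        # ~250 legal moves per turn, ~150-move games

chess_exponent = CHESS_LENGTH * math.log10(CHESS_BRANCHING)
go_exponent = GO_LENGTH * math.log10(GO_BRANCHING)

print(f"Chess game tree: roughly 10^{chess_exponent:.0f} possible games")
print(f"Go game tree:    roughly 10^{go_exponent:.0f} possible games")
# Go's tree is so vast that brute-force search alone can't crack it,
# which is why Go held out long after chess fell to computers.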

Go has thus been the 'holy grail' of AI tests ever since computers started beating humans at chess. Facebook recently suggested it had built an AI that was close to beating a human player - but Google's DeepMind AI has somewhat spoilt the party.

In October 2015, DeepMind's AlphaGo AI played five games against reigning European Go champion Fan Hui, one of the greatest Go players of his generation.

AlphaGo won all five games. 

Such was the level of AlphaGo's "skill" that experienced Go players couldn't distinguish between the AI and the human.
 
The method of learning incorporated into AlphaGo's system is key to its win - and signifies a grand development in artificial intelligence.
 
As explained in the journal Nature, the system first learnt the rules of Go, then studied 30 million moves from expert players. At that point it could 'play' Go, but not learn from its own experience: it could only ever be as good as the moves it had copied. DeepMind then set AlphaGo to play against itself, discovering new moves as it went.
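In outline, that two-stage process looks something like the toy sketch below - purely illustrative Python, with every name made up for the example. AlphaGo's real system uses deep neural networks and tree search rather than anything this simple:

import random

class ToyPolicy:
    # Stand-in for AlphaGo's policy network: remembers a preferred move per board state.
    def __init__(self):
        self.preferred = {}

    def choose(self, state, legal_moves):
        # Play the remembered move if there is one, otherwise explore at random.
        return self.preferred.get(state, random.choice(legal_moves))

    def learn(self, state, move):
        self.preferred[state] = move

def imitate_experts(policy, expert_games):
    # Stage 1: supervised learning - copy moves from a database of expert games.
    for state, expert_move in expert_games:
        policy.learn(state, expert_move)

def improve_by_self_play(policy, play_one_game, num_games=1000):
    # Stage 2: self-play - the policy plays itself and keeps the winning side's
    # moves, gradually going beyond what its human teachers showed it.
    for _ in range(num_games):
        winners_moves = play_one_game(policy, policy)
        for state, move in winners_moves:
            policy.learn(state, move)

The real system swaps the lookup table for neural networks trained by gradient descent, but the shape of the loop - imitate the experts, then improve by playing yourself - is the idea the Nature paper describes.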
 
This self-play learning resulted in a 99 per cent win rate against every other Go program it encountered.

So what? AI isn't going to take over the world via board games.

No, but DeepMind's hope is to use the techniques behind AlphaGo to develop general-purpose algorithms that can learn in similarly complex environments.

If they can apply the lessons of Go to other complex real-life problems, AI - or robots controlled by AI - could soon start out-learning humans.

And we all know what that means...
