
Google is developing a 'kill switch' for rogue robots

It's okay, guys, they've got a plan

06 June 2016

Ever since Professor Stephen Hawking warned us that a malicious AI could spell the end of humanity, we've been giving the office printer an extra wide berth.

Hawking's not alone in fearing a robotic uprising. The British team behind Google-owned AI company DeepMind are so concerned about the possibility of self-learning machines taking over the world that they're taking steps to add an emergency "stop" button.

The London-based team made headlines earlier this year when their artificial intelligence program beat a human champion at the hugely complex ancient board game, Go.

In addition to teaching robots how to think for themselves, DeepMind are eager to ensure none of their creations take over the planet. In the recently published research paper Safely Interruptible Agents, they outline how AI systems are "unlikely to behave optimally all the time".

Oh good.

They've got a plan though, as the paper explains: "If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation."

The real question is how you stop the AI from realising that there is a 'big red button' and finding a way to stop it working. The answer the researchers propose is a framework they call "safe interruptibility", which lets a human supervisor repeatedly interrupt a misbehaving AI without the machine ever learning to expect those interruptions or to resist them.

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this," states the research paper. If you want to lose yourself in a world of pretty complicated programming chat, you can read it all here

In short, the good news is that some of the brightest minds in AI are hard at work on ways to stop their creations before they start World War III. But we're still not buying an automated vacuum cleaner any time soon.

[Via: Business Insider]