Artificial intelligence: a new age of digital lifestyle management, or the beginning of the end for mankind? Joe Svetlik investigates
In films, artificial intelligence gets a pretty bad rap. When machines aren’t trying to wipe out humanity (the Terminator franchise) or harvest our bodies for energy (The Matrix), they’re shutting us out of our space ships (2001: A Space Odyssey) or shooting us with machine guns provocatively hidden in their nighties (Austin Powers). And even when they’re not being outwardly aggressive, they’re beguiling us mere mortals with their human-like charms (Her, Ex Machina). There are some exceptions (Star Trek comes to mind), but AI is usually the bad guy. Worrying, really, considering it’s about to take over our lives.
AI has been a staple of science fiction for centuries, but Alan Turing – regarded as the father of modern-day computing – was the first to explore its possibilities seriously. In 1950, he devised the Turing Test: if, when posed a series of questions, a computer could convince 30 per cent of its interrogators that it was in fact human, it could reasonably be said to think for itself. It was a groundbreaking way of thinking, especially considering it was more than six decades before a computer managed to pass the test.
Since then, researchers have made huge progress, but in fits and starts. “Sometimes there’s a lot of hype around what we do, which raises people’s expectations,” says Toby Walsh, professor of artificial intelligence at the University Of New South Wales. “The problem is when you don’t meet those expectations, there’s always a backlash.”
Next few years
Since 2009, AI has surfed a wave of optimism because of advances made in two main areas: voice translation and speech recognition. “This was because of neural networks, also known as deep learning,” says Dave Coplin, chief envisioning officer at Microsoft UK. “AI is essentially algorithms that try to spot patterns in data. Instead of looking for one pattern, neural networks look for layers of patterns simultaneously – it’s how the human brain works. Voice translation and speech recognition suddenly started working really well because these deep neural networks were so much more accurate.”
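For the technically curious, that idea of “layers of patterns” can be sketched in a few lines of Python. This is a toy illustration only – the layer sizes are invented and the weights are random, where a real network would learn them from data – but it shows the basic shape Coplin describes: each layer transforms the output of the one before it, so later layers can respond to patterns of patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: pass positive values, zero-out the rest
    return np.maximum(0, x)

def forward(features, weights):
    """Pass an input through each layer in turn."""
    activation = features
    for w in weights:
        activation = relu(activation @ w)
    return activation

# Invented sizes: 20 raw audio features -> 8 low-level patterns -> 3 higher-level ones
layers = [rng.normal(size=(20, 8)), rng.normal(size=(8, 3))]
sample = rng.normal(size=(1, 20))
print(forward(sample, layers).shape)  # (1, 3)
```

Training – the part that made speech recognition “suddenly start working really well” – is the hard bit, and is what the deep-learning breakthroughs of recent years actually solved.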
In other words, computers could hear and understand. Combine this with computer vision – the ability to identify places, people and activities in photos – and your PC is on its way to becoming a living entity.
Tech companies were quick to pounce. The major players all launched their own phone-based digital butler: Apple came out with Siri, Microsoft had Cortana and Google its Voice Activated Search. They’re a bit gimmicky at the moment, and not all that useful; ask your Android phone ‘What’s the news?’ and instead of reading the headlines, it’ll define the word ‘news’. But soon they’ll know your plans before you do.
They’ll do this by building a profile based on everything they know about you – using data from things such as your search history, movements, texts and emails. They will then predict your actions and serve up any information you might need, like traffic updates for the route to a meeting. Add the ‘Internet Of Things’ – whereby every appliance in your house is connected to the internet and all talk to each other – and we’ll enter the age of ubiquitous computing.
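Stripped of the machine learning, the logic of that traffic-update example is simple enough to sketch in Python. Everything here is made up for illustration – the `Meeting` record, the `next_alert` helper and the travel-time table are hypothetical stand-ins for the calendar, location and traffic data a real assistant would pull together.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Meeting:
    title: str
    start: datetime
    location: str

def next_alert(now, meetings, travel_minutes):
    """Flag the next meeting once it's nearly time to set off."""
    upcoming = [m for m in meetings if m.start > now]
    if not upcoming:
        return None  # nothing left in the diary today
    nxt = min(upcoming, key=lambda m: m.start)
    # Fall back to a 30-minute guess if there's no traffic data for this place
    leave_by = nxt.start - timedelta(minutes=travel_minutes.get(nxt.location, 30))
    if now >= leave_by - timedelta(minutes=10):
        return f"Leave soon for '{nxt.title}' at {nxt.location}"
    return None

diary = [Meeting("Budget review", datetime(2016, 5, 10, 9, 30), "Head office")]
traffic = {"Head office": 45}  # minutes, as a live traffic feed might report
print(next_alert(datetime(2016, 5, 10, 8, 40), diary, traffic))
# → Leave soon for 'Budget review' at Head office
```

The clever part of a real assistant isn’t this bookkeeping – it’s inferring the diary, the destinations and the travel times from your data in the first place.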
This will add a “layer of ambient intelligence” to our lives, according to Coplin, and will mean you’re never without your digital personal assistant. It will tell you when you’ve run out of milk, what temperature the house is and alert you if someone tries to break in. “This ambient AI will go with us everywhere,” says Murray Shanahan, professor of cognitive robotics at Imperial College London. “It’ll be the same voice, whether we’re talking to our smartphone, vacuum cleaner or our car.” Better hope you get on.
It might sound like something out of a sci-fi film, but these products are already starting to appear. Look at the Nest thermostat, or Apple’s HomeKit platform. It’s entirely possible that within 10 years, we’ll be living in connected homes, possibly having conversations with our vacuum cleaners.
Robo-servant or master?
Which raises the question: What’s the best use for this kind of technology? What can machines do better than us?
To find out, Google researchers fired up some classic Atari video games and pitted their AI software against a human. In Pong, the machine demonstrated superhuman performance, while in games such as Ms Pac-Man, which demand a longer-term strategy, the human player came out on top. Conclusion? “Deep learning is very good at perception tasks, but not at high-level reasoning and planning,” says Walsh. In other words, it’s great at analysing vast amounts of data very quickly, but not so good at thinking ahead.
In a 2011 test, IBM put its Watson supercomputer up against two of the finest players of US gameshow Jeopardy. The clip is on YouTube, and it won’t spoil the fun to tell you Watson absolutely creamed them.
Machines are much better drivers than us, too. Since September last year, Google has been testing 48 autonomous cars on California’s public roads, covering 140,000 miles (that’s about 15 years of typical human driving). Only four have been involved in accidents, all of which were caused by other drivers.
The US Department Of Transportation estimates 94 per cent of all car crashes are caused by human error. As Walsh puts it, “In 20 years, people will be surprised we gave everyone these lethal, human-guided weapons and let them out on the roads.”
Feeling inadequate yet? How about this: robots are also gunning for our jobs. According to a paper by Michael A Osborne and Carl Frey from Oxford University, 47 per cent of jobs could be replaced by machines in the next 20 years. The advantages are obvious: they’re cheaper, don’t need sleep or holidays, won’t take a pension and don’t bitch about their colleagues. It’s not just menial jobs: marketers, mathematical technicians, watch repairers and – gulp – journalists are among those at risk.
Whether this means mass unemployment and greater inequality, or an abundance of leisure time and a shift to more creative work, we’ll have to see. But one thing is clear: it will be a change on the scale of the industrial revolution.
Judgement Day
Tech luminaries Stephen Hawking, Bill Gates and Tesla CEO Elon Musk have all warned about AI’s potential threat to the human race. So are we in danger of being terminated?
“The AI community is divided on how real the threat is,” says Bart Selman, professor of computer science at Cornell University. “Machines could become very powerful very quickly. Then the question is what kinds of machines will people develop?”
Even with the best will in the world, AI could prove problematic. Tell it to eliminate cancer, for example, and it might wipe out all life on Earth. Technically, it would’ve done what you asked.
No one knows what the future holds. Human-level AI could arrive at any time – estimates vary from 2040 to the end of the century. Once it does, there will be no going back, so it’s crucial we take steps now to avoid any potential problems. “It will be the first time we’ve made machines that can think better than us,” Selman says. “And that will make it much harder to predict what they will do. And, more worryingly, whether we can control them.”