
Should Self-Driving Cars Be Allowed To Decide Whether You Live Or Die?

It's been a long, tedious day in the FutureCorp offices. You're checking your emails via your Google Glass 2, a haptic party tapping itself out on your wrist as your Apple Watch Air picks up another message. Your self-driving car is coasting toward a junction when dramatic error messages fill the dash - your brakes have failed, and the AI is about to take evasive action.

Cameras assess the surroundings while sensors monitor the road surface and vehicles in proximity, split-second calculations determining the course of your vehicle. Two eventualities present themselves to the AI: either running into the back of a school bus filled with children, or swerving toward the curb, stopping your car but almost certainly resulting in a fatal collision with a cyclist.

The obvious choice for the AI? To target the unfortunate cyclist. 
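To see why that "obvious" choice is so contentious, it helps to spell out the logic. A purely utilitarian controller would simply pick whichever manoeuvre minimises expected harm. The sketch below is illustrative only - the manoeuvre names, probabilities and casualty figures are invented for the example, not drawn from any real car's software:

```python
# Illustrative only: a naive utilitarian "least expected harm" rule.
# Manoeuvres, probabilities and casualty figures are invented for this example.

def expected_harm(outcomes):
    """Sum of probability x estimated casualties over possible outcomes."""
    return sum(p * casualties for p, casualties in outcomes)

# Each manoeuvre maps to a list of (probability, estimated casualties) pairs.
manoeuvres = {
    "brake_into_bus": [(0.9, 3), (0.1, 0)],    # likely injures several children
    "swerve_to_kerb": [(0.95, 1), (0.05, 0)],  # almost certainly kills the cyclist
}

# The utilitarian choice: the action with the lowest expected harm.
choice = min(manoeuvres, key=lambda m: expected_harm(manoeuvres[m]))
print(choice)  # -> swerve_to_kerb
```

On those made-up numbers, the cyclist loses. Everything hangs on where the figures come from - which is exactly where the experts take issue.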

Science fiction?

While it might sound like the stuff of sci-fi, this ethical dilemma is already fuelling interesting discussions between car manufacturers and moral philosophers, sparking interest across the internet. Self-driving cars are coming, and we're going to give them power over who lives and dies. Yet it's not as simple as that - a point that Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield (whom you probably remember as one of the judges of Robot Wars), is keen to point out.

"The bus example is overly simplistic," he tells ShortList.com. "Many accidents involve a lot more complexity of choice: two old ladies in a car, a child on a bike, a pregnant woman carrying a child, a massive truck, a school bus, the Russian ambassador; the possibilities and consequences are infinite. 

"Where reality meets philosophy, the scientific and engineering challenges are enormous and innumerable. The utilitarian perspective requires a near perfect representation of events. But car sensing systems are incapable of the fine-grained discriminations needed to distinguish between children, teenagers, young policemen and grannies. Accidents are not often static. Dynamic events unfold in time, making them difficult, if not impossible, to predict. Cars are unlikely to have the complete information about road surfaces and spillages or the weight and material of other vehicles. The danger only multiplies when there are other self-driving cars involved with different priorities. "

Car manufacturers have been keen to trumpet the advances in self-driving cars, but steering conversations toward matters of AI killing off road users is more than a touch pre-emptive.

"While the achievements so far are extraordinary, there are still some pretty big hurdles to overcome before fully autonomous cars are a realistic proposition," explains Dr Sean B. Holden, senior lecturer in Computer Science at the University of Cambridge. "For example, at a recent talk by the philosopher Margaret Boden she made the observation that we simply don't have a level of AI competency right now to tell that the person standing at the roadside waving their arms is trying to get you to stop."

Cresting the horizon

So is it ever going to happen? Is this thorny philosophical question going to put the brakes on autonomous vehicles? Not likely, says Holden.

"The facts are rather obvious: A) there will be fatalities and accidents, and B) there will be far fewer than there are with human drivers. Consider the situation in aviation: flying is incredibly safe, but it's an inescapable fact that sometimes things go wrong. I believe I'm correct in saying that, in most cases, investigations conclude that pilot error is the predominant cause of accidents."

Self-driving vehicles are going to be part of our future, then, much in the same way that autonomous technology has taken over many aspects of flying - and that might be for the best.

"Requiring that humans are kept in the decision-making loop is a non-starter: someone who is reading a book is not going to be in a position to react quickly to a potential accident situation," says Holden. "It would therefore be for the greater good to remove humans from the loop; let's face it, many of them are terrible drivers. Even if the law required someone in the driving seat who was not otherwise engaged, I imagine the level of boredom they'd experience would make their reaction to a fast-onset change of conditions pretty sluggish."

Most self-driving car systems currently use a combination of short-range sensors and satellite navigation data to motor their occupants around. On the few occasions that Google's self-driving tech has crashed, the company has stated that its cars were never at fault - it's the fleshy object behind the wheel of the other vehicle that's the problem. The sooner that technology is able to take humans out of the equation, or hand the majority of control over to a fast-thinking AI, the fewer accidents there'll be. But again, there's a problem with this utilitarian perspective.

"Being killed or injured accidentally is not the same as being selected as a target by calculations on a computer," says Sharkey. "This is a violation of our fundamental human right to life as specified in Article 2 of the Universal Declaration of Human Rights and the European Charter of Human Rights. Governments have a duty to prevent foreseeable loss of life and should not allow self-driving cars to turn into weapons whenever they enter the scene of an accident."

Drive time

The answers to these issues are yet to emerge from engineering labs, but they're on their way. Elon Musk recently donated $10 million to help ensure AI becomes a "robust and beneficial" technology, rather than a cold-thinking system that could steer your car off a cliff to theoretically save more lives.

"It is essential that manufacturers take safety seriously and seek advice from the machine ethics community," says Sharkey. "But the utilitarian option should not be on the table. For now manufacturers need to gain public trust by proceeding cautiously with slow moving self-driving cars that can avoid serious accidents altogether."

But as that trust is built, and as road fatalities (hopefully) start to fall, you should expect to hear a growing call from sectors of the car industry to remove human drivers altogether. "I suspect that, as driverless cars inevitably become a reality," explains Holden, "the dead hand of insurance, liability, health and safety, and so on, will make it impossible for people any longer to take control of cars outside of a race track. This is a two-sided sort of progress."

Enjoy that long drive in the country while you can. Its days are numbered.