Who will your autonomous car decide to kill in an emergency?


The development of autonomous vehicles raises ethical questions, and an algorithm will ultimately have to provide the answers. What happens when a car must decide whom to hit in various safety scenarios?

In The Walking Dead by Telltale Games, there is a scene in which the player's character is in a pharmacy surrounded by zombies and can save only one of the two people they are with. Both are people we know only slightly, one a man, the other a woman. No matter what you decide, your heart and stomach constrict at the difficult choice. This is the core mechanic of all the studio's games: choices and their horrifying consequences.

At the end of each episode, we get a screen showing statistics on how many players made one choice or the other. This all sounds very theoretical, but perhaps soon we will have to translate such decisions into algorithms that become the guiding principles of our autonomous cars.

The field of autonomous cars is sizzling. We all know that, and every day another big player enters the competition to create a fully autonomous vehicle. Business partnerships are forming between carmakers, technology giants and startups, and ride-sharing companies are also keenly interested in the field. BMW, Intel and Mobileye have joined forces to combine their unique capabilities (car production, powerful computer chips, computer vision and machine learning) to create a fully autonomous car. According to forecasts, autonomous cars will reach the market in a big way sometime between 2020 and 2021, with some estimating that as many as 10 million such vehicles will be on the road by 2020. Of course, there are many obstacles and barriers along the way: social, psychological, legal, infrastructure readiness and more. But one of the most difficult issues, which companies, transport organizations and governments refuse to address decisively, is the set of ethical dilemmas involved.

Whose life is worth more, the driver’s or the pedestrian’s?

Let us imagine various life-threatening scenarios in which a fully autonomous car is forced to make a fateful decision. We will start with a car that malfunctions or encounters an unexpected object or obstacle and is forced to veer sharply off course. In certain cases it may not be possible to avoid an accident, and the car will have a number of options: Should it protect the driver but hit pedestrians or other cars? (A car is better protected than a pedestrian, but the collateral damage could be worse.) Should it hit a single person on one side or a group of people? (Is there an absolute answer here, that it must always choose the single person?) Should it hit an elderly person, a child or a baby? Of course, some questions are easier: the choice between hitting an empty parked car and hitting a person is very clear. But the choice becomes far more complex when the decision is between two people.
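To make the idea of "translating decisions into algorithms" concrete, here is a deliberately naive sketch, in Python, of how such trade-offs could be reduced to a scoring function. Everything in it (the Outcome fields, the occupant_weight parameter, the example numbers) is invented purely for illustration; no manufacturer has published decision logic of this kind.

```python
# Purely illustrative sketch: one naive way an "ethics policy" could be
# encoded as a scoring function over possible maneuvers. The fields and
# weights below are hypothetical, not any real vehicle's logic.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str             # e.g. "brake_straight", "swerve_into_wall"
    pedestrians_at_risk: int  # people outside the vehicle who may be struck
    occupants_at_risk: int    # people inside the vehicle who may be harmed
    property_only: bool       # True if only unoccupied objects are hit

def score(outcome: Outcome, occupant_weight: float = 1.0) -> float:
    """Lower is better. occupant_weight encodes the trade-off between
    protecting the car's occupants and protecting everyone else."""
    if outcome.property_only:
        return 0.0  # the easy case: hitting an empty parked car
    return outcome.pedestrians_at_risk + occupant_weight * outcome.occupants_at_risk

def choose(outcomes: list[Outcome], occupant_weight: float = 1.0) -> Outcome:
    # Pick the maneuver with the lowest expected harm score.
    return min(outcomes, key=lambda o: score(o, occupant_weight))

# Example: brake straight and risk two pedestrians, or swerve into a wall
# and risk the single occupant.
options = [
    Outcome("brake_straight", pedestrians_at_risk=2, occupants_at_risk=0, property_only=False),
    Outcome("swerve_into_wall", pedestrians_at_risk=0, occupants_at_risk=1, property_only=False),
]
print(choose(options).maneuver)                       # -> swerve_into_wall
print(choose(options, occupant_weight=3.0).maneuver)  # -> brake_straight
```

Note how a single parameter flips the outcome: weighting occupants equally with pedestrians sacrifices the driver, while weighting them more heavily sacrifices the pedestrians. That one number is, in miniature, the whole ethical argument of this article.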

Consider something else: in the future, autonomous cars will be connected to infotainment systems. Think of your cellphone, scaled up to encompass your entire car. The autonomous car will give the driver plenty of time for hobbies or work, offering access to the internet, social media, eCommerce sites, nearby stores (tailored to the driver's interests based on the places they drive to), and more. This means the cars will hold a great deal of information about the driver, and perhaps about other drivers in the vicinity. In extreme situations such as those described above, would it be better to hit the driver or a pedestrian who is a criminal or a serial traffic offender? A "regular" person or a public figure or famous scientist? A person with many health problems or an athlete? The ethical questions are endless.

All is not lost: MIT and other research institutions offer solutions

Here is where the Massachusetts Institute of Technology enters the picture with the Moral Machine, a website that lets you decide what to do in the kinds of scenarios described above. You can also consult with others and discuss the dilemmas with them. MIT's goal is to build a massive database of decisions on moral dilemmas such as these. Today this is an academic project, but projects like it may well serve as a basis for future algorithms.

On the site you can play God and decide who will live and who will die. At the end of the task, you can see statistics on how many people and animals were killed as a result of your decisions. You can even create your own scenario using the Design option.

The Moral Machine was developed through joint research by MIT, the Culture and Morality Lab at the University of California, Irvine, and the Toulouse School of Economics. The researchers argue that although there is no objective right or wrong in such cases, public opinion will play a major role. To gauge that opinion, they presented ethical dilemmas to workers on Amazon's Mechanical Turk, a service that connects businesses with freelancers who carry out tasks people still perform better than computers, with the goal of mapping common ways of thinking. Here, too, various scenarios were presented, and the result was somewhat predictable but troubling: people preferred that autonomous cars be programmed to minimize the total number of fatalities. However, they did not want to buy "utilitarian" autonomous cars (those that maximize the general welfare), which prioritize the safety of those around the car over the safety of its own passengers. In other words, "utilitarian" autonomous cars are a marvelous thing, as long as other people are driving them.

To complicate matters even further, in the future, manufacturers of autonomous cars may offer passengers more than one “ethical style” (like the settings in any software or app). Who would then be responsible in the event of an accident? The driver who made such a choice in advance, or the car manufacturer?
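Building on the toy choose()/options sketch above, an "ethical style" could be nothing more than a preset that changes the weighting parameter. The style names and weights below are hypothetical; the point is only that a single owner-selected setting can flip the outcome, and with it the question of who bears responsibility.

```python
# Hypothetical "ethical style" presets, reusing choose() and options from
# the earlier sketch. Names and numbers are invented for illustration.
ETHICAL_STYLES = {
    "utilitarian": 1.0,      # occupants weighted the same as pedestrians
    "self_protective": 3.0,  # occupants weighted more heavily than pedestrians
}

owner_choice = "self_protective"  # selected in advance, like any app setting
decision = choose(options, occupant_weight=ETHICAL_STYLES[owner_choice])
print(decision.maneuver)  # -> brake_straight: the pedestrians, not the occupant, are put at risk
```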

Of course, it is important to remember that the goal of autonomous cars is primarily to prevent the enormous number of accidents caused by human error. In the United States alone, nearly 40,000 people a year are killed in accidents, with economic damages of over $1 trillion. In other words, the human being is the bug in the machine. Tesla CEO Elon Musk has stated that in the future, human beings will be banned from driving cars because “you can’t have a person driving a two-ton death machine.”

Public opinion will play a crucial role here. If fewer people purchase autonomous cars because they are programmed to protect those around the car and to sacrifice the people inside it, then the current situation — an enormous number of accidents with regular cars and human drivers — will continue.

Without orderly legislation, the future of autonomous cars will be determined by market forces and consumer preferences, and of course by lobbyists working for the car manufacturers and tech giants. In any case, it is clear that the time has come to tackle these issues seriously, and the sooner the better.

