New research investigates how people think autonomous vehicles should handle moral dilemmas.
Andreas Arnold / Bloomberg via Getty Images
In the not-too-distant future, fully autonomous vehicles will drive our streets. These cars will need to make split-second decisions to save lives – both inside and outside the vehicle.
To gauge attitudes toward these decisions, a group of researchers created a variation on the classic philosophical thought experiment known as "the trolley problem." They posed a series of moral dilemmas involving a self-driving car whose brakes suddenly give out: Should the car swerve to avoid a group of pedestrians, killing the driver? Or should it mow down the pedestrians but spare the driver? Does it matter whether the pedestrians are men or women? Children or the elderly? Doctors or bank robbers?
To put these questions to a large number of people, the researchers built a website called Moral Machine, where anyone could click through the scenarios and say what the car should do. "Help us learn how to make machines moral," says a video on the site.
The grim game went viral, several times over.
"Really beyond our wildest expectations," said Iyad Rahwan, a professor of Media Arts and Sciences at the MIT Media Lab and one of the researchers. "At one point we were getting 300 decisions per second."
What the researchers found was a series of near-universal preferences, no matter where someone took the quiz. Overall, people believed the moral course for the car was to spare the young over the old, spare humans over animals, and save the lives of the many over the few. The results, led by MIT's Edmond Awad, were published Wednesday in the journal Nature.
Using geolocation, the researchers found that the 130 countries with more than 100 respondents could be grouped into three clusters that showed similar moral preferences. And among these clusters, they found some variation.
For example, the preference for sparing younger people over the elderly was much stronger in the Southern cluster (which includes Latin America, France, Hungary and the Czech Republic) than it was in the Eastern cluster (covering many Asian and Middle Eastern countries). And the preference for sparing humans over pets was weaker in the Southern cluster than in the Eastern or Western clusters (the latter including, for example, the United States, Canada, Kenya, and much of Europe).
And they found that these variations seemed to correlate with other observed cultural differences. Respondents from collectivist cultures, which "emphasize the respect that is due to older members of the community," showed a weaker preference for sparing younger people.
Rahwan stressed that the study's results should be used with extreme caution and should not be considered the last word on societal preferences – especially since the respondents were not a representative sample. (The researchers did, however, perform a statistical correction for demographic distortions, reweighting the answers to match a country's demographics.)
What does this all add up to? The paper's authors argue that if we are going to let these vehicles onto our streets, their operating systems should take moral preferences into account. "Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them," they write.
But let's say for a moment that a society does have broadly shared moral preferences in these scenarios. Should car manufacturers or regulators actually take them into account?
Last year, Germany's Ethics Commission on Automated Driving created initial guidelines for automated vehicles. One of its most important dictates? A ban on a car's operating system making exactly this kind of decision.
"In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is strictly prohibited," the report states. "General programming to reduce the number of personal injuries may be justifiable. Those parties involved in the generation of mobility risks must not sacrifice non-involved parties."
But to Daniel Sperling, founding director of the Institute of Transportation Studies at the University of California, Davis and co-author of a book on autonomous and shared vehicles, these moral dilemmas are far from the most pressing questions about these cars.
"The biggest problem is just making them safe," he tells NPR. "They will be much safer than human drivers: They don't drink, they don't smoke, they don't sleep, they aren't distracted." So the question becomes: How safe must they be before we let them onto our roads?