Research highlight

Artificial Intelligence: Navigating the moral rules of the road
October 25, 2018

Global moral preferences for how driverless cars should decide who to spare in unavoidable accidents are reported in a paper published online this week in Nature. The findings, based on almost 40 million decisions collected from participants across the globe, may inform discussions around the future development of socially acceptable AI ethics.

Driverless vehicles will need to navigate not only the road, but also the moral dilemmas posed by unavoidable accidents. Ethical rules will be needed to guide AI systems in these situations; however, if self-driving vehicles are to be embraced, it must first be determined which ethical rules the public considers palatable.

Iyad Rahwan and colleagues created the Moral Machine — a large-scale online survey designed to explore the moral preferences of citizens worldwide. The experiment presents participants with unavoidable accident scenarios involving a driverless car on a two-lane road. In each scenario, which imperils various combinations of pedestrians and passengers, the car can either remain on its original course or swerve into the other lane. Participants must decide which course the car should take on the basis of which lives it would spare. The experiment has recorded almost 40 million such decisions.

The authors identify many generally shared moral preferences, including sparing the largest number of lives, prioritizing the young, and valuing humans over animals. They also identify preferences that vary between cultures. For example, participants from countries in Central and South America, as well as France and its former and current overseas territories, exhibit a strong preference for sparing women and athletic individuals. Participants from countries with greater income inequality are more likely to take social status into account when deciding who to spare.

Before we allow our cars to make ethical decisions, the authors conclude, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them.

doi: 10.1038/s41586-018-0637-6

