One of the most interesting aspects of autonomous cars, I think, is the ethics surrounding them. Because cars are dangerous, the programmers writing the software that governs autonomous vehicles have to make decisions about life and death. If a potential accident leaves the car with only bad options (e.g., swerving to miss one pedestrian but inevitably hitting another), what values embedded in the software will determine how it decides? Would it preserve the life of one child if that meant hitting two elderly people? Who gets to decide this? Who should be deciding it? Putting aside the fact that, by default, we have outsourced the responsibility for determining these ethics/values to a small set of multinational companies (where recent events suggest adult oversight is lacking), it is interesting to consider the range of possibilities. The article referenced below tackles this challenge head-on. It does so by evoking one of the oldest conundrums used in ethics classrooms around the world:
"THE TROLLEY problem used to be an obscure question in philosophical ethics. It runs as follows: a trolley, or a train, is speeding down a track towards a junction. Some moustache-twirling evildoer has tied five people to the track ahead, and another person to the branch line. You are standing next to a lever that controls the junction. Do nothing, and the five people will be killed. Pull the lever, and only one person dies. What is the ethical course of action?"
The relevance of this problem to the autonomous car debate quickly becomes evident:
"The excitement around self-driving cars … has made the problem famous. A truly self-driving car, after all, will have to be given ethical instructions of some sort by its human programmers. That has led to a miniature boom for the world's small band of professional ethicists, who suddenly find themselves in hot demand."
Rather than ask philosophers, however, a group of researchers at MIT decided to scale this project by designing a website ("Moral Machine," http://moralmachine.mit.edu/) that asks the general public to decide whom to spare when difficult choices need to be made:
"In one [scenario], for instance, a self-driving car experiences brake failure ahead of a pedestrian crossing. If it carries on in a straight line, a man, a woman and two homeless people of unspecified sex will be run down. If it swerves, the death count will be the same, but the victims will be two women and two male business executives. What should the car do? ... In the end [the website] gathered nearly 40m decisions made by people from 233 countries, territories or statelets."
The results are somewhat predictable (a "person with a pram" was spared the most; "girls" were favored over "boys," etc.), but are no less interesting because of that. The chart in the article (https://cdn.static-economist.com/sites/default/files/images/print-edition/20181027_STC639.png) starkly displays the relative value placed on the lives of different people (and some pets). It is interesting, for example, that a "criminal" is valued over a "cat," but not over a "dog":
"Preferences differed between countries. The preference for saving women, for instance, was stronger in places with higher levels of gender equality. The researchers found that the world's countries clustered into three broad categories, which they dubbed 'Western,' covering North America and the culturally Christian countries of Europe, 'Eastern,' including the Middle East, India and China, and 'Southern,' covering Latin America and many of France's former colonial possessions. Countries in the Eastern cluster, for instance, showed a weaker preference for sparing the young over the elderly, while the preference for humans over animals was less pronounced in Southern nations. Self-driving cars, it seems, may need the ability to download new moralities when they cross national borders."
While the authors of the project are quick to point out that these results should not simply be translated directly into public policy, they do want to make the point that some rules on this might be a good idea:
"Germany is, so far, the only country to have proposed ethical rules for self-driving cars. One of those rules is that discrimination based on age should be forbidden. That seems to conflict with most people's moral preferences."
And, although the study focuses on relatively rare/unusual events, the researchers' point is that no one is thinking seriously about the ethical dimensions of the decisions programmers make every day about the values that will guide these machines:
"Many people, says Dr Rahwan [one of the study's authors], dismiss the trolley problem as a piece of pointless hypothesising that is vanishingly unlikely to arise in real life. He is unconvinced. The specific situations posed by the website may hardly ever occur, he says. But all sorts of choices made by the firms producing self-driving cars will affect who lives and who dies in indirect, statistical ways. He gives the example of overtaking cyclists: 'If you stay relatively near to the cycle lane, you're increasing the chance of hitting a cyclist, but reducing the chance of hitting another car in the next lane over,' he says. 'Repeat that over hundreds of millions of trips, and you're going to see a skew in the [accident] statistics.'"
Take care
David
David Chandler
© Sage Publications, 2020
Instructor Teaching and Student Study Site: https://study.sagepub.com/chandler5e
Strategic CSR Simulation: http://www.strategiccsrsim.com/
The library of CSR Newsletters are archived at: https://strategiccsr-sage.blogspot.com/
A selection from the trolley
October 27, 2018
The Economist
Late Edition – Final
p. 75