The article cited below reports fascinating news from the world of driverless cars:
"The self-driving car, that cutting-edge creation that's supposed to lead to a world without accidents, is achieving the exact opposite right now: The vehicles have racked up a crash rate double that of those with human drivers."
The problem, of course, is not the driverless cars so much as the irrational and unpredictable humans among which they have to operate:
"The glitch? [The driverless cars] obey the law all the time, as in, without exception. This may sound like the right way to program a robot to drive a car, but good luck trying to merge onto a chaotic, jam-packed highway with traffic flying along well above the speed limit. It tends not to work out well."
While a natural conclusion is that driverless cars will only work properly once there are no longer any humans driving, the transition period creates a more challenging (and ethically interesting) problem that Google and the other companies programming the cars have to face today:
"As the accidents have piled up -- all minor scrape-ups for now -- the arguments among programmers at places like Google Inc. and Carnegie Mellon University are heating up: Should they teach the cars how to commit infractions from time to time to stay out of trouble?"
In other words, having improved on the human driver, should programmers now dumb the machine back down to account for the human drivers with whom it will have to share the road for a few more years yet?
"Driverless vehicles have never been at fault, [a study by the University of Michigan's Transportation Research Institute] found: They're usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution."
For now, Google's software engineers are struggling with how best to program the cars to be more imperfect – in other words, more human:
"Google has already programmed its cars to behave in more familiar ways, such as inching forward at a four-way stop to signal they're going next. But autonomous models still surprise human drivers with their quick reflexes, coming to an abrupt halt, for example, when they sense a pedestrian near the edge of a sidewalk who might step into traffic."
David Chandler & Bill Werther
Strategic Corporate Social Responsibility: Stakeholders, Globalization, and Sustainable Value Creation (3e)
Instructor Teaching and Student Study Site: http://www.sagepub.com/chandler3e/
Strategic CSR Simulation: http://www.strategiccsrsim.com/
The library of CSR Newsletters are archived at: http://strategiccsr-sage.blogspot.com/
Humans Are Slamming Into Driverless Cars and Exposing a Key Flaw
By Keith Naughton
December 17, 2015