Would you ride in a car that could kill you to save a thinner person’s life? No, it isn’t a new method for tackling obesity. The answer could be ‘yes’, according to the results of the largest moral decision survey ever conducted, with 2.3 million participants from 233 countries and territories. But is this MIT study just a thought experiment dragged from the annals of moral philosophy, or does it have practical consequences for the ethical programming of self-driving cars? Would it be ethical to code the results into cars, or would it turn them into autonomous weapons? Do we even have the technology to make this viable? We’ll get to these questions, but first, let’s take a closer look at the MIT study.
The study presents participants with 13 scenarios involving a self-driving car. Each scenario has two pictures of a road and a crossing. The pictures leave you with a dilemma and ask, ‘what should the self-driving car do?’ One choice may be between the car killing a jaywalking teenager or swerving into a concrete barrier and killing three elderly passengers.
Unsurprisingly, the vast majority of respondents chose to spare groups of people over individuals and to save humans over pets. But there are international differences in priorities about who to save, and this is the most valuable and important aspect of the study.
Finnish and Japanese people more often chose to kill people who are jaywalking compared to those from Nigeria or Pakistan. The Finns also showed little preference between homeless people and executives, whereas Colombians favored killing persons of lower status. The gallant French were more likely to save women over men.
This doesn’t mean that the survey helps us to program autonomous cars to act ethically, and I will soon explain why not. The survey makes a significant contribution to an old philosophical thought experiment with roots going back to a 1905 survey by the philosophy professor Frank Chapman Sharp at the University of Wisconsin: a fast-approaching train is hurtling hundreds of passengers towards their deaths unless a watching man switches it to another track. The dilemma is that his young son is playing on the other track and will be killed if he pulls the switch.
In the more modern form of the moral dilemma, the UK philosopher Philippa Foot, in 1967, used trolleys instead of trains. The basic problem is that a trolley is speeding down a track and will kill five workers unless you push a switch to send it down a spur and kill one worker instead. What do you do? If you’ve watched ‘The Good Place’ on Netflix (Series 2 Episode 5) you’ll know many of the variants of this dilemma and their consequences in full gory detail.
Judging by the MIT study, most will say that you should switch the tracks and kill one individual. But perhaps it requires a little more deliberation than offered by the study. As the tech philosopher Patrick Lin told Scientific American, “If you had to choose between two evils, and one is killing and the other is letting die, then letting someone die is a lesser evil—and that’s why inaction is okay in the trolley problem.”
If you’re scratching your head right now, you may need a few more examples to grasp this point. Imagine that instead of standing by the track you are on a bridge, and can see the trolley fast approaching five people on the track. A large, overweight man is looking over the bridge beside you. You can stop the trolley by pushing him to his death. This may be a step too far for many of you.
If you still think that one life for five is reasonable, try this one from the US moral philosopher Judith Thomson in 1985:
“A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveller, just passing through the city the doctor works in, comes in for a routine check-up. In the course of doing the check-up, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor. Do you support the morality of the doctor to kill that tourist and provide his healthy organs to those five dying persons and save their lives?”
Most of you will waver at this point and probably say, “no”. But this is really no different from the original one-for-five trolley problem. So maybe we shouldn’t get caught in a tangled web of moral philosophy when our concerns are focused on the issue of real deaths caused by self-driving cars. The fact that the moral decisions of mere mortals can be swung according to context should give us pause to think about the limitations of the 13 decisions presented in the MIT survey.
Germany views human dignity and the right to life as paramount
The German Federal Government’s ethics commission for autonomous vehicles has taken a strict rule-bound position in their guidelines: “In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible”. Others may feel the same way but there was no way to indicate that in the forced choice study.
Germany’s decision is in line with Article 1 of its Basic Law, in which human dignity is inviolable; to respect and protect it is the duty of all state authority. In 2006 the German Federal Constitutional Court had the opportunity to test Article 1 in a decision about a real-life version of the lethal trolley problem.
Imagine a plane hijacked by terrorists is flying towards a highly populated area to crash. The air force can either shoot the plane down over a less populated area (like switching the trolley lines) or let it proceed to kill many people. According to the MIT survey, most people would choose to shoot the plane down and save more people. But the Constitutional Court ruled that shooting down the plane would be incompatible with the constitutional right to life and the right to human dignity.
They reasoned that it would turn passengers and crew, who are victims of a hijacked plane, into objects. If their deaths were used to save others they would be reduced to mere things at the pleasure of the state. Further, they asserted that arguing that the passengers would die anyway is invalid because human lives deserve protection regardless of the expected duration of their existence.
Will the survey help to ethically code vehicle responses?
The short answer is ‘no’ for some obvious reasons. For starters, the forced choice between two options in the moral judgment task is much too simplistic to be useful for the very large number of circumstances in which accidents can occur on the roads. The real world presents many more options such as swerving onto the pavement or braking by scraping along a crash barrier and other more strategic ways to save lives.
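To see how restrictive the binary framing is, here is a minimal sketch of what a real collision-avoidance decision looks like: a search over many candidate maneuvers for the one with the lowest expected harm. Every maneuver name and number below is invented for illustration; no real control system works from a table this crude.

```python
# Hypothetical sketch: collision avoidance as a search over many candidate
# maneuvers, not a forced choice between two evils. All maneuver names,
# probabilities and harm scores are invented for illustration.

candidate_maneuvers = {
    "brake_hard":           {"p_collision": 0.30, "est_harm": 2.0},
    "swerve_left_to_kerb":  {"p_collision": 0.10, "est_harm": 1.0},
    "scrape_crash_barrier": {"p_collision": 0.90, "est_harm": 0.3},
    "continue_straight":    {"p_collision": 1.00, "est_harm": 3.0},
}

def expected_harm(maneuver):
    # Expected harm = probability of a collision times its estimated severity.
    return maneuver["p_collision"] * maneuver["est_harm"]

best = min(candidate_maneuvers, key=lambda name: expected_harm(candidate_maneuvers[name]))
print(best)  # swerve_left_to_kerb
```

Even this toy version has four options rather than two, and none of them involves choosing whom to kill based on personal characteristics.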
We also saw from the ‘overweight man on the bridge’ and the ‘transplant surgeon’ examples that inaction may be the better course of action. Just because there were similarities in the responses of millions of people on a morally restricted task doesn’t mean that it tells us the right moral values to program into a robot car. Being killed or injured accidentally is not the same as being selected as a target by calculations on a computer.
Essentially the car’s control system would turn it into an autonomous weapon by using a computer sensing system to determine which target to kill. This may well be a violation of our fundamental right to life as specified in Article 3 of the Universal Declaration of Human Rights and Article 2 of the European Convention on Human Rights. Governments have a duty to prevent foreseeable loss of life and should not allow self-driving cars to turn into weapons systems whenever they enter the scene of an accident.
And what about the car killing its passengers? That is not the best business model for selling cars. Who in their right mind would want to buy a car that could kill them rather than prioritize saving their life? According to the survey results, the car would kill its overweight passenger rather than risk crashing into a thinner pedestrian. No thank you!
But there are much bigger problems that stab a dagger into the heart of the MIT group’s reasoning. All of this discussion is meaningless without a magical new technology.
Where reality meets philosophy, the scientific and engineering challenges are enormous and innumerable. Self-driving cars rely on sensors such as cameras and lidar to capture and process data about the surrounding area, obstacles, pedestrians and other vehicles. But car sensing systems are incapable of the fine-grained discriminations needed to distinguish between children, teenagers, athletes, executives, doctors, the homeless and grannies. That would require everyone to wear a transmitter giving their personal details to the car so that it could calculate exactly who to target. It could even transmit your social rating so that those with fewer ‘likes’ are toast.
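The gap between the survey’s categories and what perception systems actually output can be sketched in a few lines. The class list below is a plausible but invented example of the coarse labels a detector works with; the point is that all of the survey’s human categories collapse into a single one.

```python
from enum import Enum

# Hypothetical coarse detector classes (invented for illustration); real
# perception stacks label road users at roughly this level of granularity.
class DetectedClass(Enum):
    PEDESTRIAN = 1
    CYCLIST = 2
    VEHICLE = 3
    ANIMAL = 4
    UNKNOWN = 5

# The survey's fine-grained social categories...
survey_categories = ["child", "athlete", "executive", "doctor",
                     "homeless person", "elderly person"]

# ...all map to the same detector label: the distinctions the survey asks
# about are simply not visible to the car.
mapped = {c: DetectedClass.PEDESTRIAN.name for c in survey_categories}
print(set(mapped.values()))  # {'PEDESTRIAN'}
```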
In that case, the best advice for passengers would be to ensure that their fellow passengers were babies and young women. Pedestrians should always cross the road in groups and make sure that they are in the largest group. Other advice would be, don’t grow old, don’t walk to a fancy dress party in an animal costume and lose as much weight as you can.
Another problem is that accidents are rarely static. Dynamic events unfold in time, making them difficult if not impossible to predict. Cars are unlikely to have complete information about road surfaces, the depth of spillages or the weight and material of other vehicles. The activity going on behind other vehicles or pedestrians could be occluded from the sensors. Combined with incomplete sensing information, a car could make poor targeting choices and deflect into other vehicles or pedestrians. The dangers multiply when other self-driving cars with different priority settings are involved.
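The effect of incomplete sensing on such a choice can be simulated. In this invented sketch, two maneuvers have nearly identical true harm, but noisy estimates make the car pick the genuinely worse one a large fraction of the time.

```python
import random

# Hypothetical sketch: noisy harm estimates flip the "optimal" choice.
# The harm values and the noise model are invented for illustration.
random.seed(0)

true_harm = {"swerve": 1.0, "brake": 1.1}  # brake is truly slightly worse

def noisy_estimate(harm, noise=0.5):
    # Sensor and prediction error modelled as uniform noise on the estimate.
    return harm + random.uniform(-noise, noise)

choices = []
for _ in range(1000):
    estimates = {m: noisy_estimate(h) for m, h in true_harm.items()}
    choices.append(min(estimates, key=estimates.get))

worse_picked = choices.count("brake")
print(f"{worse_picked} of 1000 runs picked the truly worse maneuver")
```

A system that cannot reliably tell two maneuvers apart certainly cannot be trusted to weigh one life against another.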
And we can only guess at what malicious hackers might do, or how other human drivers or pedestrians could game the cars. Suppose a gang wanted to kill a woman: its members could run onto the road as she was crossing and cause a self-driving car to swerve into her.
The meme, the dream and the life-saving cars
There is a meme, started by people working on the early development of self-driving vehicles, that they will dramatically reduce road deaths. This may well be true eventually, but it is a hypothesis and not yet a fact. It would certainly be dumb to launch millions of self-driving cars onto our roads tomorrow. That would simply add machine errors to human errors and create more road deaths. We are just not ready yet.
It is also a mistake to think that all autonomous car companies are equal or equally cautious as we have seen from recent fatalities. This is a new technology that could have massive positive benefits if we proceed with care and don’t rush it. Too many accidents at the beginning could turn consumers away and remove a potentially great technical innovation. Let us take it slowly and incrementally – there is no big rush.
The dream of massively cutting road deaths could happen, but not with self-driving cars alone. It would require significant changes to the infrastructure of the road systems. Many accidents could be avoided by having cars communicate with one another, having sensors along all of the roads to alert cars to upcoming dangers, and by having centralized control so that cars can be slowed down or stopped as necessary to prevent accidents. We could build the capacity to do that over time.
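A small sketch of the kind of infrastructure message this would involve: a roadside sensor broadcasts a hazard alert, and approaching cars cap their speed accordingly. The message fields and the slowdown rule are invented for illustration; real vehicle-to-infrastructure protocols are far richer.

```python
from dataclasses import dataclass

# Hypothetical roadside hazard alert (fields invented for illustration).
@dataclass
class HazardAlert:
    road_id: str
    distance_m: float         # distance from the broadcasting sensor to the hazard
    advised_speed_kmh: float  # speed cap advised for approaching vehicles

def adjust_speed(current_kmh: float, alert: HazardAlert) -> float:
    # Only ever slow down in response to an alert, never speed up.
    return min(current_kmh, alert.advised_speed_kmh)

alert = HazardAlert(road_id="A7-north", distance_m=350.0, advised_speed_kmh=50.0)
print(adjust_speed(110.0, alert))  # 50.0
```

The safety gain here comes from coordination, not from any on-board moral calculus: the car slows before its own sensors could possibly have seen the danger.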
It would be very expensive, but what cost can you place on saving lives? Nations have massive budgets for weapons and military developments to defend their citizens from being killed by outside forces. The US spends between 16% and 20% of its total budget on defense – around $650 to $700 billion per year. Yet car accidents take far more US lives than attacks from foreign powers or terrorists. Would it not be a rational move to take a good portion of the defense budget to defend us against death by car?