I'm no expert in the topic; not only do I not have a degree in philosophy, I have a way of seeing everything in so many different shades of gray that most of the time it's hard for me to make a decision regarding my own ethical standards. I still love the topic for a number of reasons -- because it brings up issues that the students themselves often haven't considered, because it provokes fantastic class discussions, and because it appeals to the risk-taker in me. I seldom know where the discussion is going to go ahead of time.
We usually start the unit with some exercises in ethical decision-making, presented through a list of (admittedly contrived) scenarios that force the students into thinking about such issues as relative worth. Examples: there are two individuals who are dying of a terminal illness, and you have one dose of medicine that can save one of them. Who do you save? What if it's two strangers -- what more would you need to know to make the decision? A stranger and a family member? (This one results in nearly 100% consensus, unsurprisingly.) A stranger and your beloved dog? (Are bonds of love more important, or is human life always more valuable than the life of a non-human animal?) And for the students who say they'd always choose a human life over their dog's... what if the human was a serial killer?
Some students are frustrated by the hypothetical nature of these questions, although the majority see the point of considering such issues. And there are situations in which such decisions need to be thought through beforehand -- such as in the case of self-driving cars.
Self-driving cars are an up-and-coming technology, designed to eliminate human-caused automobile accidents (those caused by fatigue, impairment, loss of attention, or simply poor driving skills). And while a well-designed self-driving car would probably eliminate the majority of accidents, the technology does raise an interesting ethical dilemma: how should such a car be programmed to behave when an accident is unavoidable?
Suppose, for example, there are three pedestrians in the road at night, and a self-driving car is programmed to swerve to miss them -- but swerving takes the car into a wall, killing the driver. In another scenario, a truck is in the lane of an oncoming self-driving car, and in order to miss colliding with the truck, the car has to cut into the bike lane -- striking and killing a cyclist. How do you program the car to make such decisions?
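To make the dilemma concrete, here is a deliberately oversimplified sketch of the two rival policies in the scenarios above -- minimize total casualties versus protect the occupant at all costs. Everything here (the maneuver names, the tidy fatality counts, the `choose_maneuver` function) is hypothetical and for illustration only; real autonomous-vehicle planners reason over probabilities and sensor uncertainty, not neat integers.

```python
# Hypothetical sketch of two accident-response policies -- not how any
# real self-driving car is actually programmed.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: int   # total expected fatalities for this maneuver
    occupant_dies: bool    # is the car's own occupant among them?

def choose_maneuver(options, protect_occupant_first=False):
    """Pick a maneuver under one of two rival policies:
    minimize total deaths, or protect the occupant at all costs."""
    if protect_occupant_first:
        # Prefer any maneuver that spares the occupant, if one exists.
        safe = [m for m in options if not m.occupant_dies]
        if safe:
            options = safe
    # Among the remaining options, minimize expected fatalities.
    return min(options, key=lambda m: m.expected_deaths)

# The pedestrians-versus-wall scenario from the post:
options = [
    Maneuver("stay in lane", expected_deaths=3, occupant_dies=False),
    Maneuver("swerve into wall", expected_deaths=1, occupant_dies=True),
]

print(choose_maneuver(options).name)                               # "swerve into wall"
print(choose_maneuver(options, protect_occupant_first=True).name)  # "stay in lane"
```

The point of the sketch is that the two policies disagree on exactly the cases that matter: a single boolean flips the decision from sacrificing the driver to sacrificing the three pedestrians, and someone has to decide, in advance, what that flag is set to.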
[Image: Google's Lexus RX 450h self-driving car, courtesy of Wikimedia Commons]
A team of researchers recently surveyed people on exactly these questions, and the results were fascinating, and illustrative of basic human nature. Over 75% of the respondents said that a self-driving car should be programmed to minimize casualties, even if it meant that the driver died as a result. It's a variant of the trolley problem -- more lives saved is always better than fewer lives saved. But the interesting part came when the researchers asked respondents whether they themselves would prefer a car programmed that way, or one that protected the driver's life first -- and the vast majority said they'd want a car that protected them rather than some random pedestrians.
In other words, saving lives is good, provided that one of the lives saved is mine.
"Most people want to live in a world in which everybody owns driverless cars that minimize casualties," says Iyad Rahwan, a computer scientist at MIT who co-authored the paper along with Bonnefon and Azim Shariff of the University of Oregon, "but they want their own car to protect them at all costs... These cars have the potential to revolutionize transportation, eliminating the majority of deaths on the road (that's over a million global deaths annually) but as we work on making the technology safer we need to recognize the psychological and social challenges they pose too."
You have to wonder how all of this will be settled. While driverless cars have the potential to reduce overall accidents and automobile fatalities, the programming still requires that some protocol be determined for decision-making when accidents are unavoidable. Myself, I wouldn't want to be the one to make that call. I have a hard enough time making decisions that don't involve life and death.
But it does give me one more interesting ethical conundrum to discuss with my Critical Thinking classes next year.
I love this question! I wonder if it becomes less of a dilemma once ALL the cars on the road are self driving? What would that take? I for one, would LOVE to see tractor trailers automated so they were more like trains (and safer!)
Would there be more of a hive-mind to help with mitigating these no-win situations? Less self-sacrifice?
We don't always do the best job at predicting our own future desires. I wonder whether anyone has studied the extent to which people feel guilt in cases where they survive at the expense of other deaths.
Since survivor guilt is a real problem even in cases where the survivor is merely lucky and the other people would've died anyway, I wonder how much worse it would be if they had good reason to believe that, for instance, four other people would be alive if the firefighters hadn't wasted so much time rescuing them first. Or if they've steered their car into a bunch of pedestrians to avoid the big mashy truck coming at them in the wrong lane.