Trolley problems

Feb 03, 2019 09:40

The New Yorker on trolley problems in self-driving cars
Both Di Fabio and Shariff agree that the advent of autonomous vehicles will force us to make our underlying moral calculations explicit. In twenty to fifty years, the majority of cars on the road will likely be driverless. If billions of machines are all programmed to make the same judgement call, it may be a lot more dangerous to cross the street as, say, an overweight man than as a fit woman. And, if companies decide to tweak the software to prioritize their customers over pedestrians, it may be more dangerous to be beside the road than on it. In a future dominated by driverless cars, moral texture will erode away in favor of a rigid ethical framework. Let’s hope we’re on the right side of the algorithm.

Psychology's trolley problem might have a problem (Slate).
[In an experiment where people were asked whether they would push a button to give an electric shock to one mouse in order to prevent five mice from being shocked]: [In conversation, t]wo-thirds of Bostyn’s subjects said yes, they would indeed press the button in this scenario... [in practice, a]bout five-sixths of these subjects pressed the actual button, suggesting they were more inclined to make that choice in real life than their fellow subjects were in hypotheticals. Moreover, people’s responses to the 10 trolleyology dilemmas they were given at the start of the experiment (whether they imagined that they’d push the fat man off the bridge and all that) did not meaningfully predict their choices with live mice. Those who had seemed to be more focused on the greater good in the hypotheticals did seem to press the real-life button more quickly, though, and they described themselves as being more comfortable with their decision afterward.
The Trolley Problem Will Tell You Nothing Useful About Morality (Current Affairs).
The trolley problem is repulsive, because it encourages people to think about playing God and choosing which people to kill. ... It warps human moral sensibilities, by encouraging us to think about isolated moments of individual choice rather than the context in which those choices occur. ... And it encourages a kind of fatalism, where everything you do will inevitably be a disaster and moral questions seem hard rather than easy.

There are plenty of moral questions we don’t discuss nearly enough: Is there a moral obligation to help refugees? Is being rich in a time of poverty justifiable? Do you have an obligation to speak out about sexual harassment? What should you do if you know someone is being abused but they explicitly ask you not to say or do anything about it? Are there any justifiable reasons for the existence of borders? Does capitalism unfairly exploit workers? Should you lie to protect an undocumented person? ... One of the hardest moral quandaries is in determining what our priorities should be: in a world filled with a million injustices, do you just pick one at random to address? It’s only because we spend so little time thinking about which questions probably matter more than others that anyone can think trolley problems are a comparably effective use of time.

Because I work adjacent to self-driving car research I get asked about the trolley problem from time to time, and these were three interesting articles about it. (The New Yorker one has less of an agenda and is the best.) The trolley problem is the hypothetical question: you see a runaway streetcar careering down the track towards five people it will kill; if you pull a switch, you'll divert it onto a track where it will kill one person instead; should you do it?

Thought 1: although the trolley problem is presented as a philosophical problem, it isn't really one. It's a question in psychology. It doesn't explore anything interesting about morality -- obviously the right thing to do is to make the choice that serves the greater good, if that's clear, and in the example above that means killing fewer people. What it does is put you face to face with your own squeamishness about directly taking an action that kills someone, as opposed to avoiding an action and letting others die. It's not about the right thing to do; it's about your relationship with your own culpability. It's interesting here that, per the Slate article, people answering the question hypothetically show a stronger bias against the action that makes them culpable than people do in real life, where, if the greater-good choice is clear, they generally make it.

Thought 2: moving the trolley problem to the self-driving car space doesn't make it any more interesting philosophically. Human drivers already face these decisions, and not very often. Self-driving cars will get into these situations even less often, and will just be programmed to mirror the decisions humans make.

(Thought 2a: There is an interesting side effect here -- self-driving cars will amount to an enormous transfer of liability from the driver to the car maker, and to the end of individual driving insurance. This means the whole cost of insuring the car over its lifetime needs to be bundled into the price of the car itself, which creates an enormous incentive to make self-driving cars genuinely safer than human-driven ones so that the price stays competitive. Of course, if self-driving cars also bring a move towards less car ownership, the insurance will be paid on trips rather than on the car, and the fleet operator may pay claims out of operating expenses rather than carrying insurance -- but the fundamental point is the same: potential accidents are a cost that has to be paid, and if the individual driver doesn't pay it directly, it still has to be paid somewhere in the system.)
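To make the bundling point concrete, here is a back-of-the-envelope sketch with entirely made-up numbers -- the premium, lifetime and sticker price below are assumptions for illustration, not figures from any of the articles:

```python
# Hypothetical numbers only: how lifetime liability folds into the car price.

ANNUAL_PREMIUM = 1200          # assumed yearly premium an individual driver pays today
VEHICLE_LIFETIME_YEARS = 12    # assumed useful life of the car
STICKER_PRICE = 35000          # assumed price of the car before bundled liability

# If liability shifts from the driver to the car maker, the expected lifetime
# cost of accidents has to show up in the price of the car instead.
bundled_liability = ANNUAL_PREMIUM * VEHICLE_LIFETIME_YEARS
effective_price = STICKER_PRICE + bundled_liability

print(f"Bundled lifetime liability: ${bundled_liability:,}")   # $14,400
print(f"Effective price of the car: ${effective_price:,}")     # $49,400

# Every dollar the maker shaves off expected accident costs (by being genuinely
# safer than human drivers) comes straight off that effective price.
```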

The New Yorker article in particular has slightly changed my mind on this.

Thought 3: If you move the decision about who to hit to an algorithm that's centrally developed and rolled out across the whole fleet, then any issues with that algorithm could significantly change outcomes compared to what happens now. Right now, reactions to trolley-problem-like situations are individual and have a large random component. Some people run over grannies, some people run over kids. But if the computer decides that everyone should run over grannies, the world does look different. So it's worth taking the time to develop the algorithm carefully and not just reflect exactly what the median driver would do. (Or it's worth having a large random element in the algorithm.)
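As a minimal sketch of what "a large random element" could mean in practice -- the maneuver names and harm scores below are invented for illustration, and no real planner looks anything like this:

```python
import random

def choose_maneuver(candidates):
    """candidates: list of (maneuver_name, expected_harm) pairs, where
    expected_harm is, say, an expected casualty count."""
    least_harm = min(harm for _, harm in candidates)
    # Keep every option whose expected harm is essentially the minimum...
    near_best = [name for name, harm in candidates if harm <= least_harm + 0.1]
    # ...and break the tie randomly, so a fleet-wide rollout doesn't hard-code
    # the exact same outcome for every car in every city.
    return random.choice(near_best)

# Hypothetical example: two options equally bad, one clearly worse.
print(choose_maneuver([("swerve_left", 1.0), ("brake_straight", 1.0), ("swerve_right", 2.0)]))
```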

Thought 4: The inverse of thought 3. It could be that there's already an undesirable bias in how people behave. The New Yorker article shows people in high-inequality countries happier to hit homeless people than businessmen, and people in Latin American countries happier to kill fat people than people out exercising. Is it right for algorithms to embed these biases? (We've already seen cases where "AI" algorithms built to make hiring decisions have just embedded existing human prejudices to bring in white males for interview, and Democratic Representative Alexandria Ocasio-Cortez was recently subject to some hilarious mansplaining for pointing this out.) In fact, German law, the only law in the world on the topic, states: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” (Hat tip again to the New Yorker article.)
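If you read that rule as an engineering requirement, the most direct way to satisfy it is to keep personal features out of the decision's inputs altogether. A toy sketch of the idea -- the types and field names below are invented, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class DetectedPerson:
    # Only geometry and motion relative to the vehicle (metres, metres/second).
    position: tuple
    velocity: tuple
    # Deliberately absent: age, gender, fitness, apparent wealth -- if the
    # fields don't exist, the collision-choice code can't distinguish on them.

def expected_casualties(people_in_path: list) -> int:
    # The harm estimate sees only how many people are in the path, never who they are.
    return len(people_in_path)
```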

Overall, then, the trolley problem in self-driving cars is a more interesting engineering problem than I'd previously thought.

However, I do question the level of attention that the trolley problem gets in the literature and in journalism. It depends on a very specific set of conditions, where there are no good choices but you nevertheless get to make a choice. How often does that really happen? As self-driving cars get better and better, won't it happen even less in the future than it does now? Have you ever read about an accident where someone says "I swerved to avoid the baby ducks even though I knew I'd hit that fat person"? Surely the point of self-driving cars is to make accidents so rare that what the car happens to be programmed to do in the trolley scenario simply doesn't matter. As the (somewhat polemical) Current Affairs article puts it, "It’s only because we spend so little time thinking about which questions probably matter more than others that anyone can think trolley problems are a comparably effective use of time."

Maybe that's the real lesson of the trolley problem. If you want people to talk about your ideas, don't worry about whether the ideas are significant. Just try to make them sexy.

(PS -- Hello again Livejournal! Yes, I am posting here because it's virtually the same as making a private note.)