Philosophers like to make up stories, especially when they’re talking about ethics and morality.
These “thought experiments” are useful tools for thinking through a problem: they help clarify what you really believe about it.
If you’re debating whether or not lying is wrong, for instance, it can be helpful to consider a case in which someone wants information from you that they will use to hurt someone else. Would it be wrong to lie then? Or is lying okay if it prevents someone from getting hurt?
There are lots of famous thought experiments from the history of philosophy. But self-driving cars have made one of them especially famous. That’s because the engineers programming these new cars have to provide an answer to it.
The trolley problem
The thought experiment is called the trolley problem. An influential British philosopher named Philippa Foot wrote about it in 1967, and it’s been discussed extensively ever since.
The problem goes like this: a trolley (or a train car) is running down a railway track, out of control. It’s heading toward five innocent people who are stuck on the track and who will be injured by the trolley.
You’re standing near a lever that would change the trolley’s course onto a second track, avoiding the five people. However, there is one innocent person on that second track, and if you pull the lever he will be injured instead.
The ethical dilemma here is whether you should or should not pull the lever. On the one hand, it seems like it would be better to have one person injured instead of five. On the other, by pulling the lever, you’re causing an injury to someone who doesn’t deserve it.
A problem that needs an answer
The trolley problem sounds like one of those ridiculous, far-fetched scenarios philosophers love to debate so much. But self-driving cars are turning it into a practical question.
In programming the artificial intelligence that does the driving in a self-driving car, engineers have to create algorithms that tell it what to do. They write computer programs that tell the car to accelerate when it detects a green light, steer to the left if it’s too close to the right side of the road, and hit the brakes if the car in front stops suddenly.
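To make that concrete, here is a minimal sketch, in Python, of what such rule-based logic might look like. The sensor inputs, thresholds, and action names are all invented for illustration; real driving software is vastly more sophisticated.

```python
# A hypothetical sketch of the kind of rule-based driving logic
# described above. Sensor names, thresholds, and actions are
# invented for illustration only.

def decide_action(light_color, distance_to_right_edge_m, lead_car_braking):
    """Pick a simple driving action from a few sensor readings."""
    if lead_car_braking:
        return "brake"                    # the car in front stopped suddenly
    if distance_to_right_edge_m < 0.5:    # too close to the right side of the road
        return "steer_left"
    if light_color == "green":
        return "accelerate"
    return "hold_speed"                   # otherwise, maintain current speed

# Example: green light, plenty of clearance, no one braking ahead
print(decide_action("green", 1.2, False))  # prints "accelerate"
```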
As part of the programming, the engineers also have to tell the car how to respond to unexpected situations—like if five pedestrians were to jump out into the car’s path.
Actually creating the technology to teach the car to swerve and dodge those pedestrians is hard enough. But what if there were another pedestrian in the only other direction the car could go?
And now we’re back at the trolley problem. If self-driving cars are going to become a reality, the engineers have to answer that question, one way or another.
How do we answer it?
The trolley problem has been so popular for so long because it’s a very difficult problem to answer. There are lots of different ways to think about ethics and morality, and it doesn’t look like we’re going to all agree anytime soon. So what do we tell our future cars to do?
According to the website Quartz, philosopher Nicholas Evans, a professor at UMass Lowell, is one of the people working on that. But rather than try to convince everyone of a particular answer, he and his colleagues are simply trying to show the results brought about by different options.
They are creating many different algorithms and running them through different versions of the trolley problem, showing the result in each case. What if it’s a choice between injuring a pedestrian and injuring the driver—is that different? Or injuring one person very badly versus injuring three people moderately—does that change what you think the car should do?
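To get a feel for what that kind of testing might involve, here is a toy sketch (not Evans’s actual research code) that runs two simple decision rules through trolley-style scenarios. The “harm” scores are made-up numbers, purely to show the idea.

```python
# A toy illustration (not Evans's actual research code) of running
# different decision rules through trolley-style scenarios. The
# harm scores below are invented for demonstration.

scenarios = [
    {"name": "pedestrian vs. driver",         "stay": 5, "swerve": 3},
    {"name": "one severe vs. three moderate", "stay": 9, "swerve": 6},
]

def never_swerve(scenario):
    """Rule 1: always stay on course, whatever the cost."""
    return "stay"

def minimize_harm(scenario):
    """Rule 2: pick whichever option scores less total harm."""
    return "swerve" if scenario["swerve"] < scenario["stay"] else "stay"

for rule in (never_swerve, minimize_harm):
    total = 0
    for s in scenarios:
        choice = rule(s)
        total += s[choice]
        print(f"{rule.__name__}: {s['name']} -> {choice}")
    print(f"{rule.__name__}: total harm = {total}")
```

Comparing totals like these across many scenarios is one way to make the trade-offs between decision rules visible, which is the spirit of the public discussion Evans describes.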
Evans is collecting all those answers in the hopes that we—the public—can be informed about what these new technologies will do. “Then we have to have a discussion as a society about not just how much risk we’re willing to take but who we’re willing to expose to risk,” Evans told the website.
Other ethical questions
But trolley problems aren’t the only ethical questions raised by self-driving cars.
Consider the impact they would have on the environment.
We’re living in a time of climate change. A recent report from the United Nations’ Intergovernmental Panel on Climate Change concluded that humans must reduce carbon emissions drastically in a little over a decade to avoid catastrophic changes.
At the same time, Time reported that self-driving cars “could reduce energy consumption in transportation by as much as 90%, or increase it by more than 200%, according to research from the Department of Energy.”
Where we fall in that big range depends on how we use these new vehicles. If self-driving cars make travel so easy that people start living farther away from their jobs, or taking more frequent trips to the store, or having their robot car drive in circles instead of paying for parking, we’ll be using a lot more energy than we do now, unless we make up for it with increased efficiency.
The question comes down to behavior, and how this new technology would change ours. Quartz pointed out many other potential problems with self-driving cars along similar lines:
There are concerns about advertising (could cars be programmed to drive past certain shops?), liability (who is responsible if the car is programmed to put someone at risk?), social issues (drinking could increase once drunk driving isn’t a concern), and privacy […]. There may even be negative consequences of otherwise positive results: If autonomous cars increase road safety and fewer people die on the road, will this lead to fewer organ transplants?
None of these questions are easy. It’s also not always clear when engineers and car companies are responsible for thinking about them. But as self-driving cars come closer and closer to reality, those decisions become more and more important.