Ethically Ambiguous Self-Driving Cars


By Dr. Lance B. Eliot, AI Insider for AI Trends, regular contributor

In 2016 the self-driving car industry was rocked by the crash of a Tesla that was on Autopilot and rammed into a tractor trailer, sadly killing the driver of the Tesla. This fatal collision gained national and global attention. Some wondered whether this was the tipping point showing that self-driving cars weren't ready for the road, and whether it might spark a backlash against their rollout. For several months, the National Highway Traffic Safety Administration (NHTSA) investigated the incident, using its Office of Defects Investigation (ODI) to ascertain the nature of the crash and what roles the human driver and the Tesla Autopilot each played.

The ODI announced its results and closed the case on January 19, 2017. Its analysis indicated that it did not identify any defects in the design of the system and that the system worked as designed. A reaction by some was shock and dismay. If the system worked as designed, does that mean the system was designed in a way that allowed it to kill the driver of a car by ramming into a tractor trailer? What kind of design is that? How can such a design be considered ethically okay?

Well, the ODI report explained that the reason the system was "cleared" was that it was designed to require the continual and full attention of the driver at all times. It was the ODI's opinion that the driver presumably could have taken back control of the car and avoided the collision. Tesla therefore was off the hook, and the tragic incident was essentially the fault of the driver since he failed to avert the collision.

For Tesla fans and for much of the self-driving car industry, there was a sigh of relief that neither the self-driving car nor its maker was held responsible for the crash. The self-driving car world got a get-out-of-jail-free card, so to speak, and could continue rolling along, knowing that as long as the system did what it was designed to do, it was not to blame, even if it led into a crash or failed to avoid a crash that it presumably could have avoided (or, for that matter, apparently for any incident at all!).

Some ethicists were astounded that the self-driving car designers and makers were so easily allowed to escape any blame. Questions that immediately come to mind include:

· Do self-driving car makers have no obligation to design a system that does not lead the driver into a dire circumstance?

· Do self-driving car makers have no obligation to design a system that detects when a crash is imminent and then tries to take evasive action?

· Can self-driving car makers shift all blame onto the shoulders of the human driver by simply claiming that, whatever happens, the human driver was supposed to be in charge, and so it is the captain of the ship who must take all responsibility?

This also raises other ethical issues about self-driving cars that we have yet to see come to the forefront of the self-driving car industry. Those within the industry are generally aware of a thought experiment that philosophers and ethicists have been batting around for decades, called the Trolley problem, which they use to explore the role of ethics in our daily lives. In its simplest version, you are standing next to a train track as a train barrels toward a junction where it can take one of two paths. On one path, it will ultimately strike and kill five people who are stranded on the tracks. On the other path there is one person. You have access to a track switch that will divert the train away from the five people and instead steer it into the one person. Would you do so? Should you do so?

Some say that of course you should steer the train toward the one person and away from the five people. The answer is obvious because you are saving four lives on net: one person dies and five people are saved. Indeed, some believe the problem has such an obvious answer that there is nothing ethically ambiguous about it at all. Ethicists have tried numerous variations to gauge the range and nature of our ethical decision making. For example, suppose I told you that the one person was Einstein and the five people were all Nazi prison camp guards who had horribly gassed prisoners. Would the choice of saving the five and killing the one still be so easily settled by the sheer number of lives involved?

Another variable manipulated in this thought experiment is whether the train is by default heading toward the five people or toward the one person. Why does this make a difference? In the case of the train heading by default toward the five people, you must take an overt action to avoid this calamity and pull the switch to divert the train toward the one person. If you take no action, the train is going to kill the five people. Suppose instead that the train was by default heading toward the one person. If you decide to take no action, you have in essence already saved the five people, and only if you actually took action would the five be killed. Notice how this shifts the nature of the ethical dilemma. Your action or inaction will differ depending upon the scenario.

We are on the verge of asking the same ethical questions of self-driving cars. I say on the verge, but the reality is that we are already immersed in this ethical milieu and just don’t realize that we are. What actions do we as a society believe that a self-driving car should take to avoid crashes or other such driving calamities? Does the Artificial Intelligence that is driving the self-driving car have any responsibility for its actions?

One might argue that the AI is no different than what we expect of a human driver. The AI needs to be able to make ethical decisions, whether explicitly or not, and ultimately have some if not all responsibility for the driving of the car.

Let’s take a look at an example. Suppose a self-driving car is heading down a neighborhood street. There are five people in the car. A child suddenly darts out from the sidewalk and into the street. Assume that the self-driving car is able to detect that the child has indeed come into the street. The self-driving car is now confronted with an ethical dilemma akin to the Trolley problem. The AI of the self-driving car can choose to hit the child, likely killing the child, and spare the five people in the car, who will be rocked by the accident but not harmed. Or the AI can swerve to avoid the child, but doing so puts the car onto a path into a concrete wall and will likely lead to the harm or even death of many or perhaps all of the five people in the car. What should the AI do?

Similar to the Trolley problem, we can make variants of this child-hitting problem. We can make it that the default is that the five will be killed and so the AI must take an action to avoid the five and kill the one. Or, we can make the default that the AI will without taking any action kill the one and must take action to avoid the one and thus kill the five. We are assuming that the AI is “knowingly” involved in this dilemma, meaning that it realizes the potential consequences.

This facet of “knowing” the ethical aspects is a key factor for some. Some assert that self-driving cars and their AI must be developed with an ethics component that will be brought to the fore whenever these kinds of situations arise. It’s relatively easy to say that this needs to be done. But if so, how will this ethics component be programmed? Who decides what the ethically right or wrong action might be? Imagine the average Java programmer deciding arbitrarily while writing self-driving code as to what the ethical choice of the car should be. Kind of a scary proposition. At the same time, we can also imagine the programmer clamoring for requirements as to what the ethics component should do. Without stated requirements, the programmer is at a loss to know what programming is needed.
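To make the programmer's predicament concrete, here is a minimal, purely hypothetical sketch (in Java, since that is the programmer imagined above) of what an ethics component might look like. Every name, harm score, and weight in it is an illustrative assumption of mine, not anything drawn from an actual self-driving system. The point of the sketch is that the weights are the ethics: until someone states them as an explicit requirement, the programmer is indeed deciding the Trolley problem arbitrarily.

import java.util.Arrays;
import java.util.List;

// Hypothetical sketch only: all names, harm scores, and weights here are
// illustrative assumptions, not any real automaker's code or API.
public class EthicsComponentSketch {

    enum Maneuver { BRAKE_STRAIGHT, SWERVE }

    // A candidate maneuver plus the harm the perception/prediction stack
    // estimates it would cause, each on a 0.0 (no harm) to 1.0 (fatal) scale.
    static class PredictedOutcome {
        final Maneuver maneuver;
        final double occupantHarm;
        final double pedestrianHarm;

        PredictedOutcome(Maneuver maneuver, double occupantHarm, double pedestrianHarm) {
            this.maneuver = maneuver;
            this.occupantHarm = occupantHarm;
            this.pedestrianHarm = pedestrianHarm;
        }
    }

    // The weights below ARE the ethical policy. Whoever sets them -- the maker,
    // a regulator, or the owner -- is answering the Trolley problem.
    static class EthicsComponent {
        private final double occupantWeight;
        private final double pedestrianWeight;

        EthicsComponent(double occupantWeight, double pedestrianWeight) {
            this.occupantWeight = occupantWeight;
            this.pedestrianWeight = pedestrianWeight;
        }

        Maneuver choose(List<PredictedOutcome> options) {
            PredictedOutcome best = options.get(0);
            for (PredictedOutcome option : options) {
                if (cost(option) < cost(best)) {
                    best = option;
                }
            }
            return best.maneuver;
        }

        private double cost(PredictedOutcome o) {
            return occupantWeight * o.occupantHarm + pedestrianWeight * o.pedestrianHarm;
        }
    }

    public static void main(String[] args) {
        // The child-darting scenario from the text, with made-up harm estimates:
        // braking straight likely kills the child; swerving likely harms the occupants.
        List<PredictedOutcome> options = Arrays.asList(
                new PredictedOutcome(Maneuver.BRAKE_STRAIGHT, 0.05, 0.9),
                new PredictedOutcome(Maneuver.SWERVE, 0.7, 0.0));

        // Equal weights minimize total expected harm; different weights give
        // a different "ethical" answer from the very same perception data.
        System.out.println(new EthicsComponent(1.0, 1.0).choose(options));  // SWERVE
        System.out.println(new EthicsComponent(3.0, 1.0).choose(options));  // BRAKE_STRAIGHT
    }
}

Notice that the very same perception data yields a different choice when the weights change. Deciding those weights is exactly the requirement that someone other than the individual programmer ought to be setting.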

Right now, the self-driving car industry is skirting the issue by going the route of saying that the human driver of the car remains the ethics component of a self-driving car. Presumably, until self-driving cars get to a true Level 5 of self-driving, meaning that the AI is able to drive the car without any needed human involvement, the existing human driver can still be the scapegoat. I would expect that some clever and enterprising lawyers are eventually going to question whether letting the AI off the hook and putting the blame entirely onto the human driver is reasonable, and whether self-driving cars at a level below 5 can escape blame.

Self-driving car makers either don’t realize that they must at some point address these AI and ethics issues, or they are hoping the issues are further down the road and so see no need to get mired in them now. With so many companies frantically striving to get the ultimate self-driving car on the road, their concern is not the ethics of the car but rather getting a car that can at least drive itself autonomously.

I see this as a ticking time bomb. The makers think there is no need to deal with the ethics issues, or they have not even pondered them, but the issues will nonetheless begin to appear, especially as we are likely to see more fatal crashes involving self-driving cars. Regulators have so far been hesitant to place much regulatory burden onto self-driving cars because they don’t want to be seen as stunting their progress. Doing so would currently be political suicide. Once we sadly and regrettably begin to see harmful incidents involving self-driving cars, you can bet that regulators will realize they must take action, lest their constituents think they fell asleep at the wheel and boot them out.

The self-driving car ethics problem is a tough one. Makers of self-driving cars are probably not the right ones to decide these ethics questions alone. Society as a whole has a stake in the ethics of the self-driving car. There are various small committees and groups within the industry that are beginning to study these issues. Besides the difficulty of deciding the ethics to be programmed into the car, we also need to deal with who is responsible for the ethical choices made, whether it be the car maker, the programmers, or, as some say, the car owner, who opted to buy the self-driving car and should have known what its ethics are.

And suppose there is agreement on the ethics choices, and you buy a self-driving car programmed that way, but the ethics component does not do what it was intended to do; imagine then having to investigate that. This is a rabbit hole that we are headed down, and there is no avoiding it, so putting our heads in the sand and pretending that the ethics problem doesn’t exist or is inconsequential is not very satisfying. Simply stated, the ethically ambiguous self-driving car needs to become the ethically unambiguous self-driving car, sooner rather than later.

This content is original to AI Trends.