Not Fast Enough: Human Factors in AI Self-Driving Cars for Control Transitions


By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor.

You are driving your car and suddenly a child darts into the street from the sidewalk. You see the child in the corner of your eye, your mental processes calculate that the car could hit the child, and you realize you should make an evasive move. Your mind races as you try to decide whether to slam on the brakes, swerve away, do both, or maybe instead speed up and get past the child before your car reaches him. As your mind weighs each option, your hands grab the steering wheel with a death-like grip and your foot hovers above the accelerator and brake pedal, awaiting a command from your mind. Finally, after what seems like an eternity, you push mightily on the brakes and come to a halt within inches of the child. Everyone is okay, but it was scary for both driver and child.

How long did the above scenario take to play out? Though it took several sentences to describe and thus might seem to have taken forever, the reality is that the whole situation lasted just a few seconds. Terrifying time. Crucial time. If you had been distracted, perhaps holding your cellphone and trying to text a message to order a pizza for dinner, you would have had even less time to react. Driving a car involves lots of relatively boring time, such as cruising on the freeway when there is no other traffic, but it also involves moments of sheer terror, split-second decision making, and hand-foot coordination.

This ability to react to a driving situation is an essential element of AI-based self-driving cars, specifically those that rely on human drivers to help out (some self-driving cars intend to take the human driver entirely out of the loop, but most do not, at least right now). For self-driving cars that expect the human driver to be ready to take over the controls, the developers had better be thinking clearly about the Human-Computer Interaction (HCI) factors involved at the boundary between the human driver and the AI automation driving the car.

Suppose that AI automation was driving the car in the above child-darts-into-street scenario. Perhaps the AI is “smart” enough to make a decision and avoid hitting the child. But suppose the AI determines that it is unable to find a solution that avoids hitting the child, and so it opts to hand over the controls to the human driver. Depending upon how much time the AI has already consumed, the time left over for the human driver to comprehend the situation and then react might be below, maybe even far below, the amount of time needed for the human’s mental calculations and hand-foot coordination.

A recent study by Alexander Eriksson and Neville Stanton at the University of Southampton sheds light on what kinds of reaction times we’re talking about (their study was published in Human Factors: The Journal of the Human Factors and Ergonomics Society on January 26, 2017). They used a car simulator and had 26 participants (10 female, 16 male; ranging in age from 20 to 52, with an average of 10.57 years of normal driving experience) serve as the human driver for a self-driving car. In this capacity, the experiment’s subjects sat awaiting the self-driving car to hand control over to them, and they then had to react accordingly. The simulation had the car traveling at 70 miles per hour, meaning that for every second of reaction time the car would move ahead by about 102 feet.
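If you want to check that figure yourself, it is simple arithmetic; here’s a quick Python sketch (the 70 mph speed comes from the study, and the conversion constants are standard):

```python
# Convert the simulator's 70 mph into feet traveled per second.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_mph = 70
feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
print(f"{speed_mph} mph = {feet_per_second:.1f} feet per second")
# prints: 70 mph = 102.7 feet per second
```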

They set up the scenario with two situations: one in which the human driver was focused on the self-driving car and the roadway, and a second in which they asked the human driver to read passages from National Geographic (now that’s rather dry reading!). In the non-distracted situation, the humans had a median reaction time of 4.56 seconds, while in the distracted situation it was 6.06 seconds. Though it is expected that the reaction time for the distracted situation would be longer, it is also somewhat misleading to focus solely on the reaction times. I say this because the reaction time was how long it took for them to take back control of the car. Meanwhile, the time it took for them to take some kind of action ranged from 1.9 seconds to 25.7 seconds.

Let me repeat that last important point. Taking back control of a self-driving car might be relatively quick, but taking the right action might take a lot longer. Right action aside, notice that it took about 5-6 seconds just to take over manual control of the car. Those are precious seconds that could spell life or death (and a distance of roughly 500-600 feet at the 70 mph speed), since a collision or incident might happen within that time frame (or distance), or the time left before a collision might be beyond your ability to avert the danger. We should also keep in mind that this was only a simulated car. The participants were likely much more attentive than they would be in a real car. They knew they were there for a driving test of some kind, and so they were on-alert in a manner that the everyday driver is likely not. All in all, I’d be willing to bet that any similar study of driving on real roads would find much longer reaction times.
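To make those numbers concrete, here’s a small Python sketch that converts each of the study’s reported times into distance traveled at 70 mph (the times come from the study; the labels are my own shorthand):

```python
# Distance traveled during each of the study's reported times, at 70 mph.
FEET_PER_SECOND = 70 * 5280 / 3600  # about 102.7 ft/s

times_s = {
    "non-distracted takeover (median)": 4.56,
    "distracted takeover (median)": 6.06,
    "fastest action taken": 1.9,
    "slowest action taken": 25.7,
}

for label, seconds in times_s.items():
    print(f"{label}: {seconds} s -> {seconds * FEET_PER_SECOND:.0f} feet")
```

The fastest action corresponds to roughly 195 feet of travel, the median takeovers to roughly 470-620 feet, and the slowest action, 25.7 seconds, to about 2,639 feet, which is nearly half a mile of roadway covered before anything gets done.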

Let’s consider some of the salient human factors aspects of the interaction between a self-driving car and a human driver:

No Viable Solution. If the AI-based system of the self-driving car cannot arrive at a solution to the driving problem, it could mean that there just isn’t any viable solution at all. Thus, handing the driving over to the human is like saying: here, have at it, good luck, pal. This is a no-win circumstance. The human driver is not really being given an option and is instead simply being passed the buck.

Hidden Problem. The AI-based system might “know” that a child is darting from the sidewalk, but when it hands control over to the human, the question arises as to how the human will know this. Yes, the human driver is supposed to be paying attention, but it could be that the human driver cannot see the child at all (suppose the AI-based system detected the child via radar, but the child is visually hidden from the human). In essence, these self-driving cars are not giving any hints or clues to the human driver about what has caused the urgency, and it is up to the human driver to be omniscient and figure it out.

Cognitive Dissonance. This is similar to the Hidden Problem, in that the context of the problem is not known by the human. Suppose the human assumes that the self-driving car is handing over control because there is a trash truck up ahead that needs to be avoided, when actually it is because the car is about to hit the child. There is a gap, or dissonance, between what the human is aware of and what the AI-based system is aware of.

Reaction Time. We’ve covered this one already, namely, the amount of time needed for the human to regain control of the car, plus the amount of time needed for the human to then take proper action. The AI-based system has to hand over control with some awareness of how much time a human might need to figure out what is going on, while still leaving enough time to take the needed action (see the sketch after this list for how that lead-time budget might be reasoned about).

Controls Access. A human driver might have put their feet off to the side of the brake and accelerator, or might have their hands reaching behind the passenger seat to grab a candy bar. Thus, even if they are mentally aware that the self-driving car is telling them to take the controls, their physical appendages are not able to readily do so. This is a controls access issue, and one that should be considered in the design of self-driving cars in terms of the steering wheel and the pedals.

False Reaction. This is one aspect that not many researchers have considered, and seemingly none of the self-driving car makers have been contemplating. Here’s the case. You are a human driver, you get comfortable with a self-driving car, but you also know that at some random moment, often when you least expect it, the AI-based system is going to shove the controls back to you. As such, some drivers will potentially be on the edge of their seat, anxious for that moment to arise. This could cause eager-beaver drivers to take back control when the AI-based system has not alerted them, making a sudden maneuver because they think the car is headed toward danger. The human is falsely reacting to a non-issue that was never announced. The human could dangerously swerve off the road or flip the car, doing so because they thought it was time to take sudden action.
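To make the Reaction Time point concrete, here is a minimal sketch of how a lead-time budget for a takeover might be reasoned about. This is purely illustrative: the function, its parameter names, and the particular numbers are my own assumptions, not the design of any actual self-driving system.

```python
# Illustrative sketch only: the parameters and numbers below are
# assumptions for discussion, not values from any real system.

def takeover_is_feasible(time_to_incident_s: float,
                         takeover_time_s: float = 6.0,  # regain control (distracted median)
                         action_time_s: float = 2.0,    # perform an evasive maneuver
                         safety_margin_s: float = 1.0) -> bool:
    """True only if the human plausibly has enough lead time to
    regain the controls AND act before the incident occurs."""
    required_lead_time_s = takeover_time_s + action_time_s + safety_margin_s
    return time_to_incident_s >= required_lead_time_s

print(takeover_is_feasible(4.0))   # False: handing over just passes the buck
print(takeover_is_feasible(12.0))  # True: the human has a fighting chance
```

The point is the inequality, not the specific numbers: if the time remaining before the incident is less than the human’s takeover time plus action time, then issuing a takeover request merely passes the buck, as described in the No Viable Solution item above.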

CONCLUSION

Overall, the rush toward self-driving cars is focused more on getting the self-driving car to drive than on the balance between the human driver and the AI-based system. There needs to be a carefully thought-through and choreographed interplay between the two. When a takeover request is lobbed to the human (these are called TORs in self-driving parlance), there needs to be a proper allocation of TOR Lead Time (TORLT). Without getting the whole human-computer equation appropriately developed, we’re going to have self-driving cars that slam into people, and the accusatory finger will be pointed at a human driver, which might be unfair in that the human might have actually been attentive and willing to help, but for whom the self-driving car provided no reasonable way to be immersed in helping out. We can’t let the robots toss a live hand grenade to a human. Humans and their alignment with the AI-based computer factors will be vital for our joint success. Think about this the next time you are the human driver in a self-driving car.