Humans Colliding with Self-Driving Cars: Who’s on the Defense?


By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor

Those darned human drivers! Yes, that’s what the self-driving car community likes to say. They love to put the blame onto those wild and crazy human drivers, especially whenever there is a collision between a self-driving car and a human-driven car. A case in point is the Arizona accident in March 2017, in which a human-driven Honda CR-V collided with an Uber self-driving car. It is a worthy example to examine closely, because it foreshadows what the widespread interaction between self-driving cars and human-driven cars will look like. As you will see, in my humble opinion, the makers of self-driving cars have to wake up and realize that they cannot simply point fingers at human drivers. That is an easy cop-out that dodges the job of ensuring that self-driving cars are truly able to drive a car. So far, the public and regulators are so agog over the wonderfulness of self-driving cars that they are buying the simplistic excuse that human drivers are bad drivers. Eventually, a threshold of pain will be reached, and the tide will turn against self-driving cars that cannot drive defensively in the same manner we expect of human drivers.

According to the Tempe, Arizona police investigative report, the human driver of the Honda was trying to make a left turn at an intersection, doing so across three lanes of opposing traffic. In a classic but certainly dangerous maneuver, the human driver observed that the two closest opposing lanes were backed up with cars, and decided to go ahead and make the left turn into the open gap. Meanwhile, the Uber self-driving car was coming along in the farther third lane; it proceeded through the intersection, and the human driver rammed into it while making the risky left turn. I think we would all agree that humans who attempt such a left turn are risking themselves and others, and that it is a type of turn that should be avoided. Sadly, I see human drivers trying it every day. They think it is worth the chance; I say they are stupid to do so, and that by waiting more patiently they could make the turn without endangering everyone. The impetus to make a risky turn is bad judgment. Period.

That being said, it turns out that the traffic signal at the intersection had gone yellow. This of course means that traffic flowing down that third lane should be cautious entering the intersection. We all know that many people will suddenly floor the accelerator and try to leap through an intersection when the light has gone yellow. Uber claims that the self-driving car did not try to speed up and jam through the intersection. Instead, they claim that the Uber Volvo was doing 38 mph and maintaining a constant speed. They also point out that 38 mph was below the posted speed limit, so the Volvo was neither speeding nor accelerating to make it through the intersection.

The human driver of the Honda plowed into the Uber Volvo, hitting so hard that the Volvo veered into a traffic pole and flipped onto its side. The Uber engineer who was in the Volvo was miraculously okay, as were all the other humans involved. This could have been a really bad outcome, as you can imagine: the occupants of the Volvo could have been injured or killed, and likewise the occupants of the Honda. There could have been a domino effect too, with the Honda careening into other cars, or the Volvo doing the same. In addition, since the collision occurred at an intersection, there could have been pedestrians nearby who could have been injured or killed. By luck, none of that happened, but it easily could have, and we see this kind of horrific accident involving all human-driven cars in the news all the time.

Let’s now unpack what happened in Tempe, Arizona. First, the police say they are holding the human driver of the Honda at fault, and that the Uber Volvo self-driving car was not at fault. Generally, I think our intuition about driving and the rules of the road leads us to believe this makes perfectly good sense. The human driver of the Honda took a great risk by making the left turn, and legally a driver is responsible for making a left turn only when it is safe to do so. Obviously, the outcome shows it was not a safe maneuver. The Volvo self-driving car had the right to pass through the intersection, regardless of whether the light was green or yellow. Had the light gone red, we’d be having a different discussion about why the Volvo entered an intersection against a red light, and we’d be debating the role of the human driver versus the self-driving car.

Does that mean there’s nothing else to discuss? No, not by a long shot. The yellow light was presumably visible to the Uber Volvo self-driving car as it approached the intersection. We all know that you need to gauge your distance to the intersection, your speed, and your ability to enter and clear the intersection before the light goes red. We also all know, as experienced human drivers, that we need to drive defensively and watch for idiot drivers attempting risky maneuvers. Would a human driver in that Volvo have made the same judgment as the self-driving car and proceeded into the intersection, or would they have braked and come to a halt? Or at least slowed down to see whether it was truly safe to enter? Especially since the nearby traffic going in the same direction had come to a near halt, which experience tells us means you should be especially watchful of other drivers who assume everything has stopped and try to dart through the gaps.
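To make the yellow-light judgment concrete, here is a rough sketch of the kind of calculation I’m describing, written in Python. To be clear, the function, the thresholds, and the numbers are all my own illustrative assumptions; this is not a claim about what Uber’s software actually computes.

```python
# A minimal sketch of the yellow-light judgment a defensive driver makes.
# All parameters are illustrative assumptions, not any vendor's real values.

def yellow_light_decision(speed_mps, dist_to_line_m, intersection_width_m,
                          yellow_remaining_s, max_comfort_decel_mps2=3.0,
                          perception_latency_s=0.2):
    """Return 'proceed', 'stop', or 'dilemma' for a yellow light."""
    # Distance covered while the system is still reacting.
    latency_dist = speed_mps * perception_latency_s

    # Can we clear the far side of the intersection before the light goes red?
    time_to_clear = (dist_to_line_m + intersection_width_m) / max(speed_mps, 0.1)
    can_clear = time_to_clear <= yellow_remaining_s

    # Can we stop comfortably before the stop line?
    stopping_dist = latency_dist + speed_mps ** 2 / (2 * max_comfort_decel_mps2)
    can_stop = stopping_dist <= dist_to_line_m

    if can_clear and not can_stop:
        return "proceed"
    if can_stop:
        return "stop"      # a defensive driver prefers stopping when both are possible
    return "dilemma"       # can neither stop nor clear: slow and cover the brake

# Example: 38 mph is about 17 m/s; 40 m from the stop line, a 25 m wide
# intersection, and 3 seconds of yellow remaining.
print(yellow_light_decision(17.0, 40.0, 25.0, 3.0))  # "dilemma"
```

With the made-up numbers shown, the sketch lands in the “dilemma” zone, which is precisely where a defensive driver slows down and covers the brake rather than sailing through at constant speed.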

We all know, too, that when you come up to a light, suddenly braking or slowing down carries the danger that an idiot driver behind you might ram into you. They are often not paying attention, and so they plow into the back of your car. Or they are looking at the intersection light, and upon seeing it go yellow they want to speed up; if you are slowing down, it just irks them, and they have decided to go faster at the same moment you have decided to go slower. It’s a bad combination.

Why didn’t the engineer in the Uber Volvo self-driving car take over the controls and bring that kind of human judgment into the situation? The engineer indicated it all happened too fast. I’ve been pounding away in my series on self-driving cars at exactly this problem with self-driving cars that are intended to hand controls over to humans: doing so at the last moment offers a false sense of hope and often makes things worse. I’ll keep making this point until someone listens.

The point about the AI aspect of the Uber self-driving car is that we really don’t know whether it had any of this kind of “intuition” about driving. I am betting that it simply saw via cameras and radar that the intersection was open, and that the yellow light meant it could proceed. I doubt it did much calculation of the chances of truly clearing the intersection, and I am willing to bet that it did not factor in the other lanes of stalled cars and the experience-laden notion that drivers often try to make a left turn across stalled lines of traffic.
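Here’s the sort of experience-laden rule I’m talking about, reduced to a toy heuristic. Again, the function name, the cues, and the multipliers are my own hypothetical illustration, not anyone’s real system.

```python
# A hypothetical caution heuristic: stalled queues beside you near an
# intersection invite cross traffic to dart through gaps, and they can also
# block your view of a left-turner. Weights are made-up illustrations.

def intersection_caution_speed(posted_limit_mps, adjacent_lanes_stalled,
                               approaching_intersection, view_occluded):
    """Scale target speed down as experience-based risk cues stack up."""
    speed = posted_limit_mps
    if approaching_intersection and adjacent_lanes_stalled:
        speed *= 0.6   # stopped queues suggest someone may cut across the gap
    if view_occluded:
        speed *= 0.8   # can't see around the queue, so slow further
    return speed

# 38 mph is about 17 m/s; with stalled queues and an occluded view, this
# heuristic would have the car ease off to roughly 8 m/s (~18 mph).
print(intersection_caution_speed(17.0, True, True, True))  # 8.16
```

The design point is simply that the rule is explicit: an engineer can write it down today, rather than hoping a learning system eventually stumbles onto it.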

Where was the defensive driving of the Uber self-driving car? There seemed to be none. It didn’t speed up and it didn’t slow down. It was like the proverbial old granny or grandpa stereotype: proceeding along merrily, not a care in the world, just opting to drive right through the intersection. You might say: well, Lance, the Uber self-driving car wouldn’t need to drive defensively if all the cars on the road were self-driving, because presumably if the Honda had been a self-driving car it would not have attempted the risky turn. Hogwash! First, we are going to have a mixture of human-driven and self-driving cars for many years to come. I have repeatedly warned that there is no overnight magic wand that turns all cars into self-driving cars. Second, we don’t really know what a self-driving car would have done had it been driving the Honda. If that self-driving car was using machine learning, it might have made similar turns successfully in the past and thus believed that making the left turn was fine, proceeding exactly as the human driver did (or possibly even worse!).

Some witnesses at the scene say they thought the Volvo self-driving car did speed up. Are we to believe Uber that the self-driving car was maintaining speed? So far, it seems there isn’t going to be any further probe into the matter. Wouldn’t it be helpful to all self-driving car makers to know more about how this incident occurred, so they could develop their self-driving cars to avoid making the same mistake? You would hope so, but for now, it is unlikely. The excitement and giddiness over the advent of the self-driving car has provided a cloak of supremacy, and no one seems to want to dare question what a self-driving car was doing or not doing.

Clearly, if self-driving cars are going to be on the roads, we must expect them to drive defensively. They cannot just move in and around us without knowing how to watch for the foibles of other drivers. Self-driving cars need to anticipate what human drivers will do. They need to anticipate what other self-driving cars will do. And keep in mind that not all self-driving cars behave the same. Each self-driving car maker is building its own version, so whatever one self-driving car can or cannot do tells you nothing about the others. Just as human drivers are distinctive and drive in an individualized manner, so are the various brands of self-driving cars. This regrettably also means there is no collective learning across all self-driving cars, which, though perhaps scary in a Big Brother way, would at least potentially allow every self-driving car to benefit from the experiences of the others.

If self-driving car makers aren’t going to infuse defensive driving tactics into their AI, we can likely expect the number of collisions and the human toll to increase. Waiting for machine learning to gradually figure out these circumstances is not the only way forward. Any experienced driver can tell you about the myriad ways they drive defensively; defensive driving is both learned and taught. Most of the self-driving car makers are still struggling just to get their cars to follow the usual rules of the road. Venturing into advanced driving techniques such as defensive driving is not something they are yet focused on. In essence, most of today’s self-driving cars are at about the level of a high school student just learning to drive. I’d say it is worse, though, in many respects, because the high school student has the human sensory capabilities the self-driving car lacks, and the high school student has human judgment, even if only that of a naive teenager.

Defensive driving needs to be about more than just avoiding hitting other cars. We need our self-driving cars to be defensive in other ways, such as avoiding hitting motorcyclists. I can vouch that on my daily freeway commute, if I drove strictly according to the stated rules of the road, I’d be hitting motorcyclists constantly as they weave in and out of traffic. It takes proper defensive driving tactics to make sure you don’t create an accident. For example, all a human car driver would need to do is suddenly switch lanes, and a motorcyclist coming up at a heightened rate of speed would ram right into the car. I’ve seen this happen to other drivers about three times in the last month alone. It’s scary and avoidable. Yes, you can say the motorcyclist was perhaps reckless, but in the end, reckless or not, the injury or death was brought about by a car that wasn’t driving defensively with respect to the reckless motorcyclist. Fault is one issue; injury or death is another.
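As a concrete illustration, a defensive lane-change check might compute a time-to-collision for anything approaching fast in the target lane, such as a lane-splitting motorcycle. The object fields and thresholds below are my own assumptions, purely for the sake of the sketch.

```python
# Hypothetical pre-lane-change check for a fast-approaching motorcyclist.
# The data structure, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    rel_position_m: float   # metres along the target lane (negative = behind us)
    rel_speed_mps: float    # closing speed (positive = approaching us)

def lane_change_is_safe(objects_in_target_lane, min_ttc_s=4.0, min_gap_m=10.0):
    """Refuse the lane change if anything will close the gap too soon."""
    for obj in objects_in_target_lane:
        gap = abs(obj.rel_position_m)
        if gap < min_gap_m:
            return False                   # already too close to merge safely
        if obj.rel_speed_mps > 0:
            ttc = gap / obj.rel_speed_mps  # seconds until it reaches us
            if ttc < min_ttc_s:
                return False               # it would arrive mid-manoeuvre
    return True

# A motorcycle 30 m back, closing at 12 m/s (~27 mph faster than us):
# time-to-collision is 2.5 s, so a defensive driver waits.
print(lane_change_is_safe([TrackedObject(-30.0, 12.0)]))  # False
```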

Self-driving cars need to watch out for pedestrians too. I’ve mentioned in my column on the fatalities caused by today’s human-driven cars that pedestrian injuries and deaths due to car accidents have continued to rise. Some believe that self-driving cars will reduce or eliminate these pedestrian incidents. I pointed out that this is not necessarily true, and that we might actually see a further rise once we begin to see more self-driving cars on the roads. Self-driving cars can also hit animals, and again this requires defensive tactics to handle the sudden crossing of a deer, or a dog chasing a cat across the street. How many self-driving cars are programmed to avoid that kind of collision? Right now, nearly none.
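For what it’s worth, the core of such a defensive tactic can be stated as a simple stopping-distance check. The deceleration and latency figures below are assumed round numbers, not any vendor’s actual parameters.

```python
# Hypothetical check for a sudden crosser (deer, dog, pedestrian).
# A sketch under assumed numbers, not any vendor's actual collision logic.

def must_brake_for_crosser(speed_mps, dist_to_crosser_m,
                           max_decel_mps2=7.0, system_latency_s=0.3):
    """True if only maximum braking right now avoids the collision point."""
    # Distance travelled before the brakes actually bite.
    latency_dist = speed_mps * system_latency_s
    # Distance needed to stop at full deceleration.
    braking_dist = speed_mps ** 2 / (2 * max_decel_mps2)
    return latency_dist + braking_dist >= dist_to_crosser_m

# At 17 m/s (about 38 mph), a deer 25 m ahead demands braking immediately:
# 17 * 0.3 + 17**2 / 14 = 5.1 + 20.6 = 25.7 m >= 25 m.
print(must_brake_for_crosser(17.0, 25.0))  # True
```

Notice how little margin there is at 38 mph; a car that isn’t scanning the roadside for possible crossers is already too late by the time one steps out.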

Those developing the AI systems for self-driving cars need to go beyond merely having automation that drives a car. I agree that having AI that can drive a car at all is a tremendous accomplishment. If the self-driving car were limited to a special track and guaranteed that there weren’t any humans around or near it, I would say that AI which simply drives the car is sufficient. But when self-driving cars are let loose among us humans, they must be equipped with defensive driving capabilities. They need to use their sensors to detect circumstances that require specialized driving tactics. They need to be proactive and anticipate dangerous situations. They need to be imbued with practices that can quickly assess a situation and take a course of action that minimizes or avoids calamity. This is not easy, but it is a must. It is a must because we cannot assume that an engineer sitting in a several-ton vehicle, a potential killing machine, can suddenly take over the controls and avoid collisions. I want self-driving cars as much as anyone else, but not if they are essentially guided missiles that mindlessly end up “not at fault” by abiding by the strict rules of the road. Let’s get some intuition into them and make them savvy AI-based drivers.

This content is original to AI Trends.