Pied Piper Approach to Car-Following with Self-Driving Cars


By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor

Two major driving techniques are being used by today’s self-driving cars: they look for lane markings to gauge where the road lane is, and they look at the car ahead to gauge appropriate forward motion. Let’s consider these two techniques. Imagine that you are driving on the freeway and want to expend the minimal effort needed to drive among the other cars that are on the freeway with you. If you can identify the lane markings to the left and right of your car, and if you stay within those lane markings, you are able to proceed within a particular lane of the freeway. It is like being on a railroad track, except that you are merely using the lane boundaries as “virtual rails” to guide how you proceed. This provides you with a path. The other thing you need to know is how fast you can go. If you follow whatever car is ahead of you, and match its speed, you then have all that you need to get going. You have a path and you know how fast or slow you can go.

Suppose there isn’t a car ahead of you? Well, you could proceed up to the maximum speed limit allowed. So, there you are, your car is on the freeway and you want to start on your journey. You look for and find the lane markings of your lane, you stay within those, and then you accelerate until you reach the maximum allowed speed limit, going slower or faster depending upon the car ahead of you. As the car ahead of you slows down, you slow down. As the car ahead of you speeds up, you speed up. Often, a novice teenage driver who is learning to drive will use this same approach. It is a very simple way to drive. It works much of the time. You can strip away all the other cognitive capabilities of human driving and, in many respects, drive a car with just these two techniques of lane detection and car-following detection.
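To make the simplicity of this concrete, here is a minimal sketch of the two-technique loop in Python. The lane offset and lead-car speed inputs, along with the speed limit and steering gain values, are hypothetical stand-ins for what a real perception stack and tuning process would provide; this is an illustration of the idea, not any actual vehicle's code.

```python
SPEED_LIMIT_MPH = 65.0   # assumed posted limit (illustrative)
STEERING_GAIN = 0.1      # illustrative proportional gain, not a tuned value

def choose_speed(lead_speed_mph):
    """Match the car ahead if there is one; otherwise drive at the limit."""
    if lead_speed_mph is None:
        return SPEED_LIMIT_MPH
    return min(lead_speed_mph, SPEED_LIMIT_MPH)

def choose_steering(lane_offset_ft):
    """Steer back toward lane center; positive offset means drifting right."""
    return -STEERING_GAIN * lane_offset_ft
```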

Does this always work? Absolutely not. There are limitations and dangers to both of these techniques. Furthermore, the combination of the two is only a simplistic means of driving a car and can readily become fallible in everyday driving situations. A true Level 5 self-driving car could not qualify as such if it used only these two approaches. The lesser Levels 2 through 4 make use of these techniques and require that a human driver be ready to take over control of the car, precisely because the two techniques are insufficient for the AI to drive the car on its own. The techniques are deceptively convincing when you see today’s self-driving car videos (see my recent column about sizzle reels of self-driving cars). Do not be fooled. These monkey-see monkey-do kinds of AI driving techniques are extremely brittle, meaning that they function only in narrow driving circumstances, and once you change an aspect of the driving situation, they fall apart.

Let’s see why these techniques are considered brittle. Suppose you are looking for lane markings and it turns out that the roadway is being repaired, so the lane markings are missing for a stretch of the road. Human drivers are usually able to adjust to the situation and still pretty much keep to lanes, even if the pavement doesn’t show them. For AI systems, this can be hard to do, and they are apt to wander outside of the unmarked lanes that the other drivers are imagining are there. There are also cases of lane markings that have been placed incorrectly and so confuse the lane path. There are lane markings that are duplicative, in that there might be old lane markings and new lane markings, so you have to decide which to follow. There are lane markings that are raised and readily apparent, but there are also lane markings that consist of flat painted indications that are faded and not so easy to see.

I’d like, though, to focus more on the car-following technique and discuss how this kind of pied piper approach can be brittle too. With the car-following technique, you need to detect whether there is a car ahead of you. How far ahead of you is that car? What is its speed? You presumably want to keep a distance between you and the car ahead that is at least approximately the minimum stopping distance for the speed you are going. If you are going along at 70 miles per hour, and the car ahead is some X number of feet ahead of you, you can calculate how much distance you need in order to safely stop your car, should the car ahead come to a stop. This distance adjusts continually as you drive. You also need to factor into your calculations the speed of the car ahead and how fast it can come to a stop.
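As a rough illustration of that calculation, here is a back-of-the-envelope Python sketch that assumes a fixed reaction time and a constant braking deceleration; the specific values are assumptions for illustration, and a real system would also account for road friction, brake condition, and the lead car's own braking capability.

```python
def stopping_distance_ft(speed_mph, reaction_time_s=1.5, decel_ft_s2=15.0):
    """Reaction-delay distance plus braking distance, in feet."""
    speed_ft_s = speed_mph * 5280.0 / 3600.0  # convert mph to feet per second
    reaction_dist = speed_ft_s * reaction_time_s          # covered before braking begins
    braking_dist = speed_ft_s ** 2 / (2.0 * decel_ft_s2)  # v^2 / (2a)
    return reaction_dist + braking_dist

print(round(stopping_distance_ft(70)))  # roughly 505 ft at 70 mph
```

At 70 mph, about 154 of those feet are covered just during the 1.5-second reaction delay, before the brakes even engage, which is part of why the gap has to adjust continually as speeds change.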

We all do this kind of mental calculation every time we drive a car. A teenager who is learning to drive is warned to keep a certain number of car lengths away from the car ahead. You can spot them on the freeway because they are the only car with that open length ahead of them. Seasoned drivers tend to ignore the cautionary car-following distances and instead butt right up to other cars. This eliminates, or certainly reduces, available reaction time and more than likely means that if the car ahead slams on its brakes, the following car is going to slam right into it. We all routinely run that risk as we drive on the freeways today. Very few human drivers try to keep a proper distance from the car ahead. It is a game of chicken that we all play, each day. A novice driver discovers quickly that if they try to maintain the proper distance they will get honked at, and other cars will jump into the open space, cutting down the available stopping distance anyway.

Today’s self-driving cars are programmed to be like that novice teenage driver. The AI purposely attempts to maintain the proper distance per the stopping-distance charts. This is impractical when driving among human drivers, since human drivers are not usually going to abide by proper stopping distances. You can often spot a self-driving car by its attempts to keep that following distance, which can be just as obvious as when a teenage driver tries to do the same.

This raises an interesting and crucial question for self-driving car makers. Do you allow the AI to cut down on the proper amount of car-following distance, and thus violate the advised practices of safe driving? If so, and if that self-driving car gets into an accident because it lacked a properly designated safe distance, should the self-driving car maker be considered at fault? Lawyers are certainly going to make that argument and loudly proclaim that the self-driving car was driving in an unsafe manner. On the other hand, if the self-driving car is programmed to always and only drive with the safe-distance approach, it can find progress nearly impossible, because it is going to continually see-saw, trying to adjust to the car ahead and keep a following distance that other nearby human drivers will continually violate.

This is why some purists of self-driving cars bemoan the human driver being on the road. These purists point out that if we could make all cars become self-driving cars, the self-driving cars would all react to each other in the proper way, i.e., they would maintain proper driving distances. Indeed, one of the touted great advantages of self-driving cars is that they will be able to do a pied piper of long chains of car-following. Trucking companies are going to use car-following to guide many trucks on the same route, almost like coupling train cars together into a long train. This also allows for greater fuel efficiency, since the trucks can draft off each other. This all sounds good, but we are a long, long time away from having only and all self-driving cars and trucks on the roadways. The utopian world that these purists are dreaming about won’t be here for decades.

Let’s also consider some other drawbacks of the simplistic car-following technique. Have you ever driven on the autobahn in Germany and seen or heard about the massive car accidents? These are frequently caused by cars speeding along at very high speeds when one of the cars suddenly falters. Maybe the car has a sudden flat tire. Maybe the car hits debris on the road. Maybe the driver just goes nuts and unexpectedly slams on the brakes. In any case, the leading car that suddenly comes to a halt catches the follower cars unawares. One by one, the follower cars ram into the car ahead of them. This continues, sometimes with dozens of cars all piling up. In areas that have a lot of fog, you see the same thing happen. Cars ram into each other, not realizing that the car ahead has stopped, or realizing it but not having time to react and avoid colliding.

The car-following technique has the potential for an adverse domino effect. Car after car can get wiped out as they all ram into each other, domino style. In that sense, the car-following approach has the positive of making it easy to drive and enabling cars to draft off each other, but it also carries the severe consequence that a lot of cars can all become part of the same catastrophic accident. Some self-driving car makers point out that with the advent of V2V (Vehicle-to-Vehicle) communications, cars would be better able to protect against these domino-effect accidents. Yes and no. Certainly, having an ability for the cars to “talk” with each other is going to help reduce the risks, but it does not eliminate them. We still need to be aware of the dangers of the domino effect.
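As a toy illustration of why V2V helps but does not eliminate the risk, consider how a brake event propagates down a chain of follower cars. The reaction-time figure here is an assumption for illustration; without V2V, each car reacts only to the car directly ahead, so the delays stack, whereas a broadcast reaches every follower at roughly the same time.

```python
def brake_onset_times(n_cars, reaction_s=1.5, v2v=False):
    """Time (seconds) at which each car in the chain begins braking."""
    if v2v:
        # Every follower hears the lead car's broadcast nearly at once.
        return [0.0] + [reaction_s] * (n_cars - 1)
    # Without V2V, each car only reacts to the car directly ahead of it.
    return [i * reaction_s for i in range(n_cars)]

print(brake_onset_times(5))            # [0.0, 1.5, 3.0, 4.5, 6.0]
print(brake_onset_times(5, v2v=True))  # [0.0, 1.5, 1.5, 1.5, 1.5]
```

Even in the V2V case, every follower still needs stopping distance; the broadcast buys reaction time, not physics.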

In terms of the leader role in the car-following scheme, let’s consider some aspects of how this works (a simple sketch of the follower’s decision logic appears after this list):

1) Car leader maintaining speed.

If the car leader is maintaining its speed, the car follower’s task is easy, since it can presumably aim to simply match that speed.

2) Car leader speeding up.

If the car leader is speeding up, the car follower presumably can and should do the same. The reason is to maintain the distance between it and the car ahead. With human drivers, suppose the car leader is now going faster than the allowed speed limit. Should the self-driving car keep pace, even though doing so breaks the speed limit?

3) Car leader slowing down.

If the car leader is slowing down, the car follower needs to slow down too, and at the same pace. This again is to maintain the distance between them. If it is a gradual slowdown, then the car follower should make the same gradual slowdown. Suppose, though, that the car behind the car follower is not also slowing down? The car follower that is slowing due to the leader can get hit from behind by a car that is not watching what is happening and runs right up on it. This can easily happen, and a self-driving car needs to anticipate it (see my column on the art of defensive driving for self-driving cars).

4) Car leader hits their brakes.

If the car leader suddenly hits their brakes, the car follower needs to do the same. Even so, it is possible that the car follower might not come to a halt in time and will ram into the car leader. The car follower might need to take some other evasive action to avoid hitting the car leader, such as swerving around it. Self-driving cars need to have this capability and not just blindly try to halt in the same lane as the car leader.
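Pulling the four scenarios together, here is a sketch of how a follower might classify the leader’s behavior and pick a response. The acceleration thresholds, the gap check, and the action names are illustrative assumptions, not any actual self-driving car’s logic.

```python
def follower_action(gap_ft, min_safe_gap_ft, lead_accel_mph_per_s):
    """Map the leader's observed acceleration to a follower response."""
    if lead_accel_mph_per_s <= -10.0:
        # Scenario 4: leader hit the brakes hard. If the gap is already too
        # short to stop within, braking alone won't do; consider evading.
        return "swerve_to_evade" if gap_ft < min_safe_gap_ft else "brake_hard"
    if lead_accel_mph_per_s < -1.0:
        return "ease_off"               # Scenario 3: gradual slowdown.
    if lead_accel_mph_per_s > 1.0:
        return "speed_up_within_limit"  # Scenario 2: leader speeding up.
    return "hold_speed"                 # Scenario 1: leader holding steady.

print(follower_action(gap_ft=80, min_safe_gap_ft=150, lead_accel_mph_per_s=-12))
# -> swerve_to_evade
```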

Consider that for each of the four scenarios above, we pretended that the car leader was a human-driven car. These scenarios actually should be viewed as circumstances that could involve any combination of human-driven cars and self-driving cars. In other words, we could have a car leader that is a human-driven car, followed by a self-driving car. Or, we could have a leader that is a self-driving car, followed by a human-driven car. The utopian world of all self-driving cars would be the scenario of a self-driving car leader and a self-driving car follower. Today, we mainly have a car leader driven by a human and a car follower driven by a human.

In real-world circumstances, we are going to see a combination of these scenarios linked to each other. For example, we might designate a human-driven car as “HD” and a self-driving car as “SD” and thus have these configurations:

  a) Car leader HD: Car follower SD
  b) Car leader HD: Car follower HD
  c) Car leader SD: Car follower HD
  d) Car leader SD: Car follower SD

We can further extrapolate these to include more than just two cars. In a tighter notation, HD:SD would be a human-driven car followed by a self-driving car, while HD:SD:HD would be a human-driven car followed by a self-driving car followed by a human-driven car. This notation allows us to then consider the variants of what actions might occur at the lead car and what the follower cars might or might not do.
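For completeness, the full set of chains of any length can be enumerated mechanically; here is a small Python sketch that reproduces the four two-car configurations above and scales to longer chains.

```python
from itertools import product

def chain_configs(n_cars):
    """All leader-to-follower chains of n cars in the HD/SD notation."""
    return [":".join(combo) for combo in product(("HD", "SD"), repeat=n_cars)]

print(chain_configs(2))       # ['HD:HD', 'HD:SD', 'SD:HD', 'SD:SD']
print(len(chain_configs(3)))  # 8 variants, e.g. 'HD:SD:HD'
```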

Some say that we should have a kind of master control of all cars on the road. If there were some all-controlling master system, it would be able to orchestrate all cars and we’d never have accidents. I am not so sure that this is true, and even if it could be done, we probably should question whether we want a master control running our lives. The pied piper approach has some merits, but we need to be mindful of what it means for us all to be lemmings, and whether that bodes well.

This content is original to AI Trends.