Edge Problems at Core of True Self-Driving Cars: Achieving the “Last Mile”


By Dr. Lance B. Eliot, the AI Trends Insider

Whenever there is a final piece to a puzzle that is very hard to solve, it is often referred to as achieving the “last mile.” This colloquial phrase arose from the telecommunications industry and is based on the notion that the final leg of reaching a customer is often the most costly and arduous to undertake. We might lay out fiber-optic cable underground along a neighborhood street, but then the real difficulty comes in extending it to reach each household on that block. Physically connecting to the customer premises becomes a logistically enormous problem and one that is not readily solved under reasonable cost and time constraints.

This same phenomenon is found throughout our daily lives. We can often get something 80% or 90% or even 99% done, and then get stuck at that final 20% or 10% or 1% at the end. Now, sometimes, that last piece is not overly essential, and so whether you achieve that final oomph might not be crucial. In other cases, the final aspect determines whether the entire effort was successful. Imagine flying troops to a foreign land: they rope down from a helicopter hovering over a bad guy’s domicile, break down the door of the property, and rush into the place, but then the evildoer manages to escape out a hidden passageway. After all of that effort, after all of that preparation, things went awry in the “last mile,” and so the entire mission is for naught.

The word “mile” in this context is not to be taken literally as a distance indicator. Instead, it is to be considered a metaphor for whatever last bit of something needs to be done. You are at work, putting together an important presentation for the head of the company. You slave away for days to prepare it. On the day of the presentation, you get dressed up in your finest work clothes, and you rehearse the presentation over and over. Finally, the meeting time arrives and you go to the conference room to make your presentation. The company head is there, waiting impatiently to see and hear it. You load up your presentation and connect to the screen. But, it turns out, the screen won’t work. You are stuck. You try to wave your hands in the air and pretend that the presentation is being shown, but that “last mile” undermined you. Hope this story didn’t give you a nightmare.

Anyway, there is a “last mile” that we are facing in the self-driving cars realm. If not figured out, this last piece of the puzzle will prevent self-driving cars from achieving true self-driving car capabilities. Right now, self-driving cars are not true self-driving cars in the sense that we don’t yet have a Level 5 self-driving car (see my column on the Richter scale for self-driving cars). We aren’t going to ultimately have Level 5 self-driving cars if we don’t solve the “last mile” aspects.

At the Cybernetics Self-Driving Car Institute, we are specifically focusing on the “last mile” of software needed to ultimately arrive at true self-driving cars.  We are doing this by concentrating on the “edge problems” that few others are currently thinking about.

What is an “edge problem” you might ask? In computer science, we often carve up a problem into its core and then identify other portions that we claim to be at the edge of the problem. This is a classic divide-and-conquer approach to solving problems. You tackle what you believe to be the most essential aspect of the problem and delay dealing with the other parts. Often, this is done because the so-called edges of the problem are vexing. They are extremely difficult to solve, and you don’t want to stall your progress by inadvertently getting bogged down in the hardest part of the overall problem.

Indeed, today’s self-driving car makers are primarily dealing with what most would perceive as the core of the driving task. This entails having a car be able to drive along a road. You can use relatively straightforward and at times simplistic methods to have a car drive down a road. For a highway, you have the sensors detect the lane markings of the highway. By finding those, you have identified a lane into which you can have the car drive. There’s a striped line to the right of the car and another striped line to the left of the car, both of which provide a kind of virtual rail, like train tracks, within which you just need to keep the car confined. You next use the sensors to detect a car ahead of you. You then have your car follow that car ahead, playing a kind of pied piper game. As that car ahead speeds up, you speed up. If it slows down, you slow down. See my column on the Pied Piper approach to self-driving cars.
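To make this concrete, here is a minimal sketch of that simplistic lane-keeping and follow-the-leader logic, written in Python. The sensor inputs, the proportional gain, and the two-second following gap are hypothetical placeholders for illustration, not any car maker’s actual implementation:

```python
# Minimal sketch of "keep between the lane lines and follow the leader."
# All sensor readings, gains, and thresholds are hypothetical placeholders.

def lane_keep_and_follow(dist_to_left_line_m, dist_to_right_line_m,
                         lead_car_distance_m, lead_car_speed_mps,
                         own_speed_mps, desired_gap_s=2.0):
    """Return (steering_correction, target_speed) for one control cycle."""
    # Steer back toward the lane center; positive correction steers right.
    lane_center_error = (dist_to_left_line_m - dist_to_right_line_m) / 2.0
    steering_correction = -0.1 * lane_center_error  # simple proportional term

    # Pied-piper speed control: keep roughly a two-second gap to the car ahead.
    desired_gap_m = desired_gap_s * own_speed_mps
    if lead_car_distance_m < desired_gap_m:
        target_speed = min(lead_car_speed_mps, own_speed_mps - 1.0)  # back off
    else:
        target_speed = lead_car_speed_mps  # otherwise simply match the leader

    return steering_correction, target_speed
```

Notice how little this follow-the-leader recipe actually knows about the road; everything beyond the lane lines and the car ahead is out of scope, which is precisely why the edge problems matter.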

Many novice human drivers such as teenagers who are learning to drive will use this same approach to driving. They watch what other cars do, and even if the teenager doesn’t comprehend all the other aspects surrounding the driving task (road signs are overwhelming, watching for pedestrians is distracting, etc.), they can at least do the simplistic follow-the-leader tactic. You’ve probably noticed that most of the existing self-driving cars act in the same way. You as the human must first navigate the car into a situation that allows for this simplistic approach to be used. For example, you drive your car from your house onto the local freeway, and once you are safely on the freeway, you then engage the self-driving car capability.  It is about the same as using cruise control, which we’ve had for many years. You as the human do the “hard” work of getting the car into a circumstance whereby the myopic piece of AI automation can then perform its task.

The amount of intelligence embodied in today’s self-driving cars is quite shallow. Shallow is the word we use to describe an AI system that is brittle rather than robust. A shallow AI system is only able to do a particular task, and once you start to go outside a rather confining scope, the AI system is no longer able to cope with the situation. Today’s self-driving cars demand that a human driver be instantly ready to intervene for the AI of the self-driving car once it reaches the bounds of what it can do. These bounds aren’t impressive, and so the human must be ready at all times to intervene. Only if you have a pristine driving situation is the AI able to proceed without human intervention.

Thus, if you are on a freeway or highway, and if it is a nice sunny day, and if the traffic is clearly apparent around you, and if the surface of the road is normal, and if there aren’t any other kinds of nuances or extraordinary aspects, the self-driving car can more or less handle the driving. Toss even the slightest exception into the mix, and the self-driving car is going to ask you as the human driver to intervene. This level of AI ability is good enough perhaps for a Level 2 or maybe a Level 3 self-driving car, but we aren’t going to get readily to a safe Level 4, and certainly not at all to a true Level 5, if we continue down this path of assuming a straightforward driving environment.

I am not saying that we shouldn’t be proud of what self-driving cars are now able to undertake. Pushing forward on self-driving car technology is essential toward making progress, even if done incrementally. As I have previously stated in my columns, Google’s approach of aiming for the Level 5 self-driving car has been laudable, and while Tesla has aimed at the lower levels of self-driving cars, we need someone working at those lower levels to gain acceptance for self-driving cars and spur momentum toward Level 5. Google has concentrated on experiments, while Tesla has concentrated on day-to-day driving. Both approaches are crucial. We need the practical, everyday experience that the Tesla and other existing self-driving cars are providing, but we also need the moonshot approaches such as that of Google (though, as I’ve mentioned, Google has now also shifted toward a let’s-get-something-onto-today’s-roads stance).

I’ve been writing in my columns about the various edge problems that confront the self-driving car marketplace. These edge problems are the “last mile” that will determine our ability to reach Level 4 and Level 5. Solving these edge problems also aids the Level 2 and Level 3 self-driving cars, but they are in a sense merely considered “handy” for Level 2 and Level 3 (helpful to improving the self-driving car at those levels), while they are a key necessity for Level 4 and Level 5.

Here’s a brief indication of the AI software components that we are developing at the Cybernetics Self-Driving Car Institute, of which I’ll reveal just those aspects that I’ve already covered in prior columns (there are other additional “stealth” related efforts underway too that I won’t be mentioning herein; sorry, can’t give out all of our secrets!):

Pedestrian behavior prediction. Today’s self-driving cars are crudely able to detect that a pedestrian is standing in the middle of the street, and so the AI core will then try to come to a stop or take an avoidance action to keep from running into the pedestrian. But, there is almost no capability today of predicting pedestrian behavior in advance. If a pedestrian is on the sidewalk and running toward the street, this is essentially undetected today (it is not something that the AI system has been programmed to be concerned about). Only once the self-driving car and the pedestrian are in imminent danger of colliding, and in an obvious manner, does the AI core realize that something is amiss. Unfortunately, this lack of in-advance anticipation leads to circumstances in which there is little viable way to safely deal with the pending accident. Our AI component, in contrast, can help predict that the pedestrian is going to enter the street and potentially create an accident with dire consequences, and thus provide greater opportunity for the self-driving car to take avoidance actions. By solving this edge problem aspect, it will greatly improve self-driving cars at all levels of the SAE scale, it will reduce the chances of accidents involving pedestrians, and it will definitely be needed to achieve the true Level 5. Besides predicting pedestrian behavior, we are also including prediction of the behavior of bicyclists and motorcyclists.

“Avoiding Pedestrian Roadkill by Self-Driving Cars,” by Dr. Lance B. Eliot.
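To illustrate the kind of anticipation involved in pedestrian behavior prediction, here is a minimal sketch that extrapolates a tracked pedestrian’s recent velocity and asks whether that path crosses the curb within a short horizon. The coordinate convention, the three-second horizon, and the constant-velocity assumption are illustrative simplifications, not our production approach:

```python
# Minimal sketch of anticipatory pedestrian prediction using constant-velocity
# extrapolation. Coordinates, horizon, and inputs are illustrative assumptions.

def pedestrian_may_enter_road(ped_position, ped_velocity, curb_y, horizon_s=3.0):
    """Predict whether a pedestrian's extrapolated path crosses the curb line.

    ped_position: (x, y) in meters, with y increasing toward the roadway.
    ped_velocity: (vx, vy) in meters/second, estimated from recent tracking frames.
    curb_y:       y-coordinate of the curb the pedestrian would step over.
    """
    _, y = ped_position
    _, vy = ped_velocity
    if vy <= 0.0:                      # not moving toward the road at all
        return False, None
    time_to_curb_s = (curb_y - y) / vy
    if 0.0 <= time_to_curb_s <= horizon_s:
        return True, time_to_curb_s    # gives the planner time to brake or swerve early
    return False, None
```

A real predictor would use richer motion and intent models (gait, head pose, crosswalk context), but even this crude extrapolation shows the difference between reacting to a pedestrian already in the street and anticipating one about to enter it.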

Roundabouts.  It’s admittedly not every day that you encounter a roundabout, also known as a traffic circle or a rotary. But, when you do encounter it, you need to know how to navigate it. Self-driving cars treat this as an edge problem, something that they aren’t worried about right now as it is an exception rather than a common driving situation. We are developing AI software to plug into a self-driving car’s core AI and provide a capability to properly and safely traverse a roundabout.

“Solving the Roundabout Traversal Problem for Self-Driving Cars,” by Dr. Lance Eliot.
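As a rough illustration, roundabout traversal can be framed as a small state machine: approach, yield until there is a safe gap in the circulating traffic, circulate, and then leave at the target exit. The state names, the assumed four-second gap threshold, and the idealized perception inputs below are illustrative assumptions only:

```python
# Minimal sketch of roundabout traversal as a state machine. The states,
# gap threshold, and perception inputs are illustrative assumptions.

APPROACH, YIELD, CIRCULATE, EXIT = "approach", "yield", "circulate", "exit"

def roundabout_step(state, gap_in_circle_s, exits_passed, target_exit):
    """Advance the traversal state for one decision cycle."""
    if state == APPROACH:
        return YIELD                                    # slow down and prepare to yield
    if state == YIELD:
        # Enter only when circulating traffic leaves a safe gap (assumed 4 seconds).
        return CIRCULATE if gap_in_circle_s >= 4.0 else YIELD
    if state == CIRCULATE:
        # Stay in the circle until the target exit comes up.
        return EXIT if exits_passed == target_exit else CIRCULATE
    return EXIT                                         # signal and leave the circle
```

The hard part, of course, is producing reliable values for the gap and the exit count from raw sensor data; the state machine itself is the easy bit.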

Conspicuity. When humans are driving a car, they use the car in conspicuous ways to try to warn other drivers and pedestrians. This includes using the headlights, the horn, various special maneuvers, etc. Self-driving car makers aren’t yet incorporating these acts of conspicuousness into their AI, since this is not considered core to today’s driving tasks. But, in order to achieve Level 4 and Level 5, these edge problem aspects will be a differentiator for self-driving car makers.

“Conspicuity for Self-Driving Cars: An Overlooked but Crucial Capability,” by Dr. Lance B. Eliot.

Accident scene traversal. When a self-driving car today happens to come upon an accident scene, the AI hands over the driving of the car to the human driver, since its core doesn’t know what to do. This is because an accident scene has numerous driving exceptions that the AI is not yet able to cope with; it’s an edge problem. Our software allows the AI to invoke specialized routines that know what an accident scene consists of and how to have the self-driving car safely make its way through or around it.

“Accident Scene Traversal by Self-Driving Cars,” by Dr. Lance B. Eliot.

Emergency vehicle awareness. As humans, we are aware that we need to listen for sirens that indicate an emergency vehicle is near our car, and we then need to pull over and let the emergency vehicle proceed. Or, maybe we see the flashing lights of the emergency vehicle and must make decisions about whether to speed up or slow down, or take other evasive actions with our cars. This is not considered a core problem by the existing AI systems for self-driving cars; instead, it’s considered an exception or edge problem. We are developing AI software that provides this specialized capability.

“Emergency Vehicle Awareness for Self-Driving Cars,” by Dr. Lance Eliot.

Left turns advanced-capabilities. The left turn is notorious for being considered a dangerous act when driving a car. Though self-driving cars have a traditional left-turn capability at their core, whenever a particularly thorny or difficult left turn arises, the self-driving car tends to hand the controls back to the human driver. These are considered edge problem left turns. Handing things over to the human is a risky gambit, though, as the human driver is thrust into a dicey left-turn situation at the last moment; plus, this handing over of control to a human is not allowed at Level 5. We are developing AI software that handles these worst-case-scenario left turns.

“Are Left Turns Right for Self-Driving Cars: AI Insights,” by Dr. Lance B. Eliot.

Self-driving in midst of human driven cars. Most of the self-driving cars today assume that other cars will be driven by humans who are willing to play nicely with the self-driving car. If a human decides to suddenly swerve toward the self-driving car, the simplistic core aspects don’t know what to do, other than make pre-programmed radical maneuvers or hand the controls over to the human driver. We are developing AI software that comprehends the ways that human drivers actually drive and assumes that a true self-driving car has to know how to contend with human driving foibles.

“Ten Human-Driving Foibles and Self-Driving Car Deep Learning Counter-Tactics,” by Dr. Lance B. Eliot.

Roadway debris reactions. Self-driving cars are able to use their sensors to detect roadway debris, but the AI core right now tends not to know what to do once the debris is detected. Deciding whether to try to roll over the debris or swerve to avoid it is currently considered an edge problem. Usually, the self-driving car just hands control to the human driver, but this can be a troubling moment for the human driver to react to, and there also isn’t going to be a human driver available at Level 5. We are developing AI software that aids in detecting debris and provides self-driving car tactics to best deal with the debris.

“Roadway Debris Cognition for Self-Driving Cars,” by Dr. Lance B. Eliot.
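As a simplified illustration of the decision involved, the sketch below chooses among rolling over the debris, swerving, or braking, based on debris size, whether the adjacent lane is clear, and the time to impact. The thresholds are made-up placeholders; a real system would weigh many more factors (speed, road surface, debris material, surrounding traffic):

```python
# Minimal sketch of a roll-over / swerve / brake choice for detected debris.
# All thresholds are made-up placeholders for illustration.

def debris_tactic(debris_height_m, adjacent_lane_clear, time_to_impact_s):
    if debris_height_m < 0.05:                  # small and flat: safe to drive over
        return "roll_over"
    if adjacent_lane_clear and time_to_impact_s > 1.5:
        return "swerve"                         # enough room and time to change lanes
    return "brake"                              # otherwise shed speed within the lane
```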

Road sign interpretation. Self-driving cars are often not scanning for and interpreting road signs. They tend to rely upon their GPS and map data to figure out aspects such as speed limits and the like, rather than searching for and interpreting road signs. This, though, is a crucial edge problem, in that there are road signs paramount to the driving task that cannot be found anywhere else other than as the car drives past them. For example, when roadwork is being done, temporary road signs are placed out to warn about the changed driving conditions. We are developing AI software that does this road sign detection and interpretation.

“Making AI Sense of Road Signs,” by Dr. Lance B. Eliot.
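Here is a simplified sketch of that pipeline: detect sign-shaped regions in a camera frame, classify them, and let a posted or temporary sign override what the map or GPS claims. The detect_sign_regions and classify_sign helpers are hypothetical stand-ins for whatever vision models a given self-driving stack actually uses:

```python
# Minimal sketch of road sign detection and interpretation. The two helper
# functions passed in are hypothetical stand-ins for real vision models.

def interpret_road_signs(camera_frame, map_speed_limit_mph,
                         detect_sign_regions, classify_sign):
    speed_limit = map_speed_limit_mph
    warnings = []
    for region in detect_sign_regions(camera_frame):
        label, value = classify_sign(region)           # e.g., ("speed_limit", 45)
        if label == "speed_limit":
            speed_limit = value                        # a posted sign beats stale map data
        elif label in ("roadwork", "lane_closed", "detour"):
            warnings.append(label)                     # temporary signs are invisible to the map
    return speed_limit, warnings
```

The point of the sketch is the override: temporary roadwork signs exist only in the camera’s view, so no amount of map data can substitute for reading them.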

Human behavior assessment. Today’s self-driving cars do not have at their AI core the ability to look at other humans and detect their motions and indications about the driving task. For example, suppose you come up to an intersection where the traffic lights are out, and a traffic officer is directing traffic. This is considered an edge problem by today’s AI developers. If you are in a self-driving car and it encounters this situation, even if it can figure out what is going on, it will just hand over control of driving to you. This hand-off can be dangerous, and it is also not allowed at Level 5. We are developing software that can detect and interpret the actions of humans that are part of the ecosystem of the self-driving car.

“The Head Nod Problem in AI for Self-Driving Cars,” by Dr. Lance B. Eliot.

As mentioned, these kinds of edge problems are seen by many of the existing car makers as not currently crucial to the AI core of the self-driving task. This makes sense if you merely want to get a self-driving car to drive along a normal road in normal conditions, and if you assume that you will always have an attentive human driver ready to be handed the controls of the car. These baby steps of limited scope for the AI core are, though, going to backfire as we get more self-driving cars on the roadways and the human drivers in those self-driving cars become less attentive and less aware of what is expected of them in the mix between the AI driving the car and their own driving of the car.

Furthermore, solving these “edge problems” is essential if we are to achieve Level 5. By the way, these edge problems involve driving situations that we encounter every day, and are not some farfetched aspects. Sometimes, an edge problem in computer science is a circumstance that only happens once in a blue moon. Those kinds of infrequent edge problems can at times be delayed in solving, since we assume that there’s a one in a million chance of ever encountering that particular problem.

For the edge problems I’ve identified here, any of these driving situations can occur at any time, on any roadway, in any locale, and under normal driving conditions. These are not unusual or rare circumstances. They involve driving tasks that we take for granted. Novice drivers are at first unfamiliar with these situations and over time learn how to cope with them. We are using AI techniques along with machine learning to push self-driving cars up the learning curve so that they can properly handle these edge problems.

We are also exploring the more extraordinary circumstances, involving extraordinary driving conditions that only happen rarely, but first it makes the most sense to focus on the everyday driving that we would want self-driving cars to be able to handle, regardless of their SAE level of proficiency. Some of the development team liken this notion to the way in which we’ve seen Operating Systems evolve over time. A core Operating System such as Microsoft Windows at first didn’t provide a capability to detect and deal with computer viruses, and so separate components arose for that purpose. Those anti-virus capabilities were eventually incorporated into the core part of the Operating System.

We envision that our AI edge problem components will likewise be at first add-ons to the core of the AI self-driving car, and then, over time, the core will come to integrate these capabilities directly into the AI mainstay for the self-driving cars. We’ll meanwhile be a step ahead, continually pushing at the boundaries and providing added new features that will improve self-driving car AI capabilities.

Sometimes, edge problems are also referred to as “wicked problems” because they are very hard to solve and seemingly intractable. These are exactly the kinds of problems we enjoy solving at the Cybernetics Self-Driving Car Institute. By solving wicked problems, aka edge problems, we can ensure that self-driving cars are safer and more adept at the driving task, and will ultimately reach the vaunted true Level 5. We encourage you to do likewise and help us all solve these thorny edge problems of the self-driving car task. Drive safely out there.

This content is original to AI Trends.