Ghosts in AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Ghosts. Real or not?

There’s a hotel in San Diego, California that supposedly has a ghost that appears from time to time and has reportedly been seen by guests as it floats throughout the halls of the hotel. I went there for a computer industry conference and, due to my serving on the organizing board for the event, I got to know the hotel staff. When I asked about the alleged ghost, they were at first reluctant to talk about it. Turns out it wasn’t something that the hotel management necessarily wanted to promote, and it was hoped that the public would just stop asking about the ghost.

I eventually found out that the story about the ghost was based on the early history of the hotel. Supposedly, when the hotel was first being built, there was a dispute between two male construction workers over a woman they were both seeing. The story goes that one of those construction workers killed the other inside the hotel prior to its completion and hid the body in a dirt hole that had been dug for a new swimming pool. The swimming pool was ultimately completed, and the body was hidden from view.

Years later, the swimming pool began to show some cracks and it was dug up to be resurfaced. Guess what, the remains of a dead body were found! Meanwhile, guests of the hotel had been reporting for years that they would sometimes see an apparition of a male floating throughout the hotel. Wow, ghost sightings, coupled with a story that could explain the basis for the ghost. Perfect. The hotel staff then explained that the construction worker was killed in a specific room of the hotel, that guests would sometimes report hearing eerie sounds in that particular room, and that the image of the ghost would often appear there first whenever a sighting happened and then float throughout the rest of the hotel.

One way or another, I had to stay in that room. I asked the hotel staff if I could book that ghostly haunted room for the computer industry event. They said yes. I told friends and colleagues about it. Some wondered if I would be frightened by the appearance of the ghost, if it so appeared. I told them I would be ecstatic to actually see a ghost. I began to wonder, though, if my friends and colleagues might try to trick me by purposely arranging for some kind of creepy sounds or lighting to fool me into believing a ghost was there. Anyway, after getting all excited about the prospect of seeing a ghost, I must report to you, with regrettable sadness and disappointment, that after having been in the room for the entire week of the conference, I did not see the ghost. Not even once. Darn!

Does the fact that I did not see the ghost therefore prove that the ghost does not exist? No, of course not. Not seeing the ghost doesn’t prove much of anything. For those that insist ghosts exist, presumably they need to showcase that there is a ghost, but that doesn’t mean they have to make it appear on demand. For those that insist ghosts don’t exist, the mere absence of a ghost cannot prove that it does not exist; it only suggests that at that time and place there apparently wasn’t a ghost (assuming, even, that the ghost would be detectable by our normal senses of sight and so on).

Now, don’t get me wrong and think that I’m agreeing that ghosts do exist. All I can say is that I’ve not yet personally experienced a ghost. Furthermore, when other people claim they’ve seen a ghost, well, it just seems questionable. Anyway, I’m keeping my mind open that maybe there are ghosts, and at the same time maybe there are not.

Speaking of ghosts, have you ever been using your laptop or desktop computer and it suddenly did something untoward or oddball-like that led you to exclaim that maybe a ghost was in it (or, perhaps similar evil spirits)?

In such a case, I think we might all agree that the word “ghost” is being used in a more real-world way. It’s unlikely that you believe an actual apparition of a ghostly figure opted to invade your computer and mess with it. Instead, you are suggesting that something mysterious occurred, without any apparent rhyme or reason. And, lacking a rational explanation, you ascribe it to some inexplicable cause.

Thus, you use the word “ghost” as a convenient placeholder. It succinctly conveys that you don’t know what caused the problem and furthermore usually means that it happens intermittently. If something was happening with apparent and predictable regularity, you’d probably believe there was some systemic reason for it. By the matter happening intermittently, seemingly randomly, it gets tossed into the ghost category.

I’ve managed numerous Help Desk crews during my former years as a computer team manager, and many Help Desk specialists see themselves as ghostbusters. End-users will often report that some ghost-like issue has occurred in their computer and want the help desk staff to find the ghost and get rid of it. This is reported more often by end-users that don’t know much about computers. They lack the vocabulary and technical wherewithal to otherwise ferret out the problem, so it is easy to just say that a ghost has appeared. Some help desk staff even wear ghostbuster shirts or buttons, jokingly taking on the role of ghost finder, ghost remover, and ghost aftermath fixer of technology.

Perhaps you’ve experienced a ghost in your car. Ever had your car suddenly hiccup while the engine is idling at a red light? I’m guessing you’ve had this happen. Eerily, it happens just once and doesn’t repeat. What happened, you might ponder? What suddenly caused it? There didn’t seem to be any apparent reason for it to occur. And, whatever it is, it only appeared once. Bizarre. That’s likely what goes through your mind. You then also start to worry: suppose it’s something serious. Maybe this is just the first indicator. Suppose it happens again, and in a more dire situation. Or should you just shrug it off as a fluke? Maybe it’s not worth worrying about.

Those pesky ghosts!

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and along the way we’ve been coming up with ways to deal with ghosts.

Best to Anticipate Ghosts in AI Self-Driving Cars

Yikes, ghosts in AI self-driving cars? This might seem like a scary possibility. There are some AI self-driving car pundits that keep portraying AI self-driving cars as though they are perfect and will never have any problems. It’s kind of wild that anyone would think this. A self-driving car is still a car. We all know that cars have problems. Parts wear out and things break. We also all know that sometimes a car design or the parts chosen for the car can have flaws, and there are often vehicle recalls related to design elements or parts of the car. Let’s all be realistic and realize that AI self-driving cars are going to have problems.

For my article about the debugging of AI self-driving cars, see: https://aitrends.com/ai-insider/debugging-of-ai-self-driving-cars/

For my article about recalls and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/auto-recalls/

For the difficulty of code obfuscation in AI systems, see my article: https://aitrends.com/selfdrivingcars/code-obfuscation-for-ai-self-driving-cars/

For bugs that can freeze-up an AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

Not only will a self-driving car experience physical problems as parts wear out or break, but the software involved in a self-driving car will have issues too. The AI system is a complex and convoluted set of software components. Invariably, there are going to be bugs in the AI system. We already know that bugs and other software maladies have occurred in a wide range of complex systems, such as those designed and developed for spacecraft, for airplanes, and for a slew of similar real-time systems. There’s no special reason that an AI self-driving car will be an exception and somehow never experience a bug.

I know that some AI self-driving car pundits shake off the potential of bugs by saying that the use of OTA will deal with it. OTA, or Over The Air, refers to the self-driving car having an electronic communications capability to connect, typically with a cloud-based system, that provides updates to the self-driving car. Thus, in theory, if a bug is found in the on-board systems of the self-driving car, the auto maker or tech firm can prepare a software fix and push the fix down into the self-driving car’s AI system. This is certainly logical and feasible, but it falsely suggests that bugs aren’t going to occur (they will). OTA is really about being able to (hopefully) get a fix in place quickly; it has little to do with eliminating bugs in the first place.
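To make the OTA notion concrete, here’s a minimal sketch of what a client-side update check might look like. Everything in it is hypothetical: the endpoint URL, the manifest fields, and the apply_patch helper are my own inventions for illustration, and a real deployment would add authentication, cryptographic signature verification, and staged rollout.

```python
import json
import urllib.request

# Hypothetical OTA endpoint; a real system would use an authenticated,
# signed channel rather than a plain fetch like this.
UPDATE_URL = "https://updates.example-automaker.com/ai-stack/latest"

def check_for_update(installed_version: str):
    """Ask the cloud for the latest AI software version manifest."""
    with urllib.request.urlopen(UPDATE_URL) as resp:
        manifest = json.load(resp)  # e.g. {"version": "4.2.1", "patch_url": "..."}
    return manifest if manifest["version"] != installed_version else None

def apply_patch(manifest: dict) -> None:
    # Placeholder: download the patch, verify its signature, stage it,
    # and only activate it when the vehicle is parked and safe to update.
    ...
```

Note that the hard part isn’t the fetch; it’s deciding when it is safe to swap out the software that is driving the car.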

For more about OTA and self-driving cars, see my article: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

Rather than keep referring to these anomalies as bugs, let’s switch terminology and call them ghosts.

I’m guessing that we’ll likely have self-driving car owners and occupants that are going to be reporting that there are ghosts in their AI self-driving cars. This is akin to reporting that a ghost is in your laptop or desktop computer. It’s akin to when your conventional car hiccups at the red light and you blame it on a ghost.

You’ll be in an AI self-driving car and something will happen that seems unexpected and mysterious, for which the easiest way to describe it is by saying that it had a ghost. Again, it’s unlikely many people will genuinely believe that an evil spirit was in the AI self-driving car; they are just using the convenience of saying it was a ghost to depict that it happened out-of-the-blue. Ghosts are going to be reported in AI self-driving cars, I can guarantee it.

Who will be the AI self-driving car ghostbusters?

Presumably, the auto maker or tech firm will have ghostbusters. In addition, the companies that do maintenance and support for AI self-driving cars will likely need ghostbusters. The ghostbusters in the field will at times not necessarily be able to figure out what the ghost is. They will possibly need to confer with the auto maker or tech firm. Likewise, the auto maker or tech firm might not be able to readily figure out the ghost and will rely upon or need to have some ghostbusters in the field to assist.

You might be aware that ghosts have already been encountered in cars that are somewhat like self-driving cars. First, be aware that self-driving cars are categorized by levels of capabilities. The topmost self-driving car is a Level 5. Level 5 self-driving cars can be driven entirely by the AI, without any human driver needed. Indeed, a Level 5 usually has no pedals and no steering wheel, since the expectation is that there won’t be a human driver and therefore no need to provide such controls. Self-driving cars below a Level 5 involve co-sharing of the driving task between the AI and the human driver. A human driver must be present and ready to drive in a self-driving car that is less than a Level 5.

For the levels of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For my framework about AI self-driving cars, see my article:  https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For what can happen when AI developers burnout, see my article: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my forensic analysis of the Uber incident, see: https://aitrends.com/selfdrivingcars/initial-forensic-analysis/

For my article about responsibilities and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

Tesla’s cars are currently considered to be around a Level 2 or Level 3. As reported in the news, some Tesla owners have from time to time claimed that their cars did strange things, such as the doors of the car opting to unlock by themselves. This would be disconcerting since, if you’ve parked your car someplace and are in, say, the grocery store, you’d be worried that your car might suddenly unlock itself and perhaps be vulnerable to hoodlums or to being stolen. There are also reports alleging that sometimes a door will open while the car is in motion. Again, this is problematic and quite dangerous, if true.

For conventional cars, when a ghost occurs, it is almost always about some mechanical or physical item that perhaps is wearing out, defective, or broken. When an AI self-driving car has a ghost, it could be that the software has inadvertently caused a problem to appear. Suppose the AI system gets into a part of the code that has to do with unlocking the car and, due to perhaps the code being improperly written, it opts to unlock the car even though there’s no sound basis to do so at that moment in time. The same could occur with opening the car door.

If the AI developer did not anticipate the car door possibly opening while the car is in motion, they might not have developed the code to try to prevent opening the car door while the car is in motion. It could be that the developer did not write anything that would intentionally open the door while the car is in motion, and therefore the developer assumes it won’t ever happen. But, it could be that something else invokes the car door opening code and violates the assumption that the developer made. Without having anticipated that the car door might open while the car is in motion, the developer didn’t even put in place any kind of double-check in the code to prevent this from happening.
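As an illustration of the kind of double-check I mean, here’s a minimal sketch. The function and variable names (request_door_open, release_door_latch, and so on) are hypothetical stand-ins, not any auto maker’s actual API; the point is that the guard enforces the developer’s assumption rather than trusting it, no matter which code path issued the request.

```python
def log_anomaly(msg: str) -> None:
    print(f"[ANOMALY] {msg}")  # stand-in for a real telemetry logger

def release_door_latch() -> None:
    pass  # stand-in for the real actuator interface

SPEED_EPSILON_MPS = 0.1  # treat speeds below this as "parked" (illustrative value)

def request_door_open(vehicle_speed_mps: float) -> bool:
    """Gatekeeper for every door-open request, regardless of its origin.

    Even if no code path is supposed to open a door while the car is
    moving, this check enforces that assumption instead of trusting it.
    """
    if vehicle_speed_mps > SPEED_EPSILON_MPS:
        log_anomaly("door-open requested while vehicle in motion; refused")
        return False
    release_door_latch()
    return True
```

The refusal is also logged, which matters later: a ghost that is caught and recorded is far easier to bust than one that silently succeeds.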

Some pundits for AI self-driving cars are of the belief that the AI will be so good that it would somehow magically “know” not to open the car door when the car is in motion. These pundits seem to believe that the AI will have the same kind of “common sense reasoning” that humans have. There is still much debate within the AI community about whether common sense reasoning is needed for AI self-driving cars. I’ll say right now that we’re not going to have common sense reasoning for at least the first generation of true AI self-driving cars, so don’t bet that this notion of “common sense” will be built into your self-driving car and watching over it for you.

For my article about common sense reasoning and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

For my article about the egocentric concerns about AI development, see: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For my article about reverse engineering AI self-driving cars, see: https://aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For real-time aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

Finding the Ghost in an AI Self-Driving Car

Where might ghosts surface in an AI self-driving car? Let’s consider the key aspects for an AI self-driving car, which consist of:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance

There can be a potential ghost appearance in any of those key areas.
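As a rough illustration of where a ghost could hide, here is a skeletal version of that processing loop. The stage functions below are placeholders of my own invention, not any vendor’s actual architecture; what the sketch shows is that an anomaly arising in any one stage flows downstream into everything after it.

```python
# Placeholder stage implementations; in a real system each is a large subsystem.
def collect_and_interpret(feeds): return feeds
def sensor_fusion(readings): return readings
def update_virtual_world_model(fused): return fused
def ai_action_planner(world): return {"maneuver": "lane_keep"}
def issue_car_control_commands(plan): print("issuing:", plan)

def drive_cycle(raw_sensor_feeds):
    """One pass through the five key stages. A fault injected at any
    stage propagates downstream, which is what makes ghosts hard to localize."""
    readings = collect_and_interpret(raw_sensor_feeds)  # 1. sensor data collection and interpretation
    fused = sensor_fusion(readings)                     # 2. sensor fusion
    world = update_virtual_world_model(fused)           # 3. virtual world model updating
    plan = ai_action_planner(world)                     # 4. AI action planning
    issue_car_control_commands(plan)                    # 5. car controls command issuance
```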

In terms of frequency, a ghost might:

  •         Occur just once and never seemingly appear again
  •         Occur more than once, seemingly at random
  •         Occur more than once with a semi-random pattern
  •         Occur more than once with a determinable pattern

We also need to realize that it might not be just one ghost. Often, one anomaly occurs and it sparks or generates another. Someone might believe that there’s just one ghost that accounts for all of the ghosting. This, though, is something that a well-versed ghostbuster knows to look for. Trying to ascribe all the mysterious behavior to one ghost might not make sense; there might indeed be more than one, and they might or might not be related to each other.

Thus, keep these in mind:

  •         There can be just one ghost
  •         There can be more than one ghost
  •         Multiple ghosts might be related to each other
  •         Multiple ghosts might be unrelated to each other

It’s also vital to consider whether the ghost is something relatively benign or whether it can have severe consequences. A ghost that messes with the steering of the AI self-driving car could cause the self-driving car to swerve at the worst of times, perhaps hitting another car or going head-on into a wall. A ghost that causes the headlights to go into high beams, well, certainly annoying, but not a significant safety issue per se (that being said, I’d still want to find the ghost).

From a concern perspective, be looking for:

  •         The ghost has no adverse consequences and is inconsequential
  •         The ghost has mild adverse consequences but not substantively unsafe
  •         The ghost has adverse consequences and is considered mildly unsafe
  •         The ghost has adverse consequences and is considered unsafe
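These two taxonomies, frequency and severity, lend themselves to a simple triage record for ghost reports. Here’s a hedged sketch; the enum names and values below are one plausible encoding I’ve made up for illustration, not any industry standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class Frequency(Enum):
    ONCE = "occurred once, never seen again"
    RANDOM = "recurs with no discernible pattern"
    SEMI_RANDOM = "recurs with a semi-random pattern"
    PATTERNED = "recurs with a determinable pattern"

class Severity(Enum):
    INCONSEQUENTIAL = 0  # no adverse consequences
    MILD = 1             # adverse, but not substantively unsafe
    MILDLY_UNSAFE = 2    # adverse and considered mildly unsafe
    UNSAFE = 3           # adverse and considered unsafe

@dataclass
class GhostReport:
    description: str
    frequency: Frequency
    severity: Severity
    related: list = field(default_factory=list)  # other reports possibly sharing a root cause

# The high-beam ghost: annoying, but triaged well below a steering ghost.
headlight_ghost = GhostReport(
    "high beams engage unprompted", Frequency.RANDOM, Severity.MILD
)
```

Even a crude record like this helps the ghostbusters spot that several “separate” reports may in fact be one related cluster.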

Let’s now walk through an example of a potential ghost in an AI self-driving car and let you be the ghostbuster.

The AI self-driving car is driving along on an open highway and there’s no traffic nearby. There’s a human occupant in the self-driving car. It’s a Level 5 self-driving car and the human is not driving and has no provision to drive the car. The self-driving car is in the slow lane. The self-driving car makes a lane change into the fast lane. It stays in the fast lane for just a brief moment and then changes lanes back into the slow lane.

The human occupant notices the lane changes and looks around to see what might have prompted them. There weren’t any other cars nearby, and it seems mysterious that the AI suddenly decided to change lanes and then change back to the lane it was in. One would assume that after getting into the fast lane, the self-driving car would have stayed there for some length of time; otherwise, why make the lane change at all? The human occupant worries that this might be some kind of glitch and wonders if it might happen again, and if so, whether a future mysterious lane change might occur when other cars are nearby, leading to the self-driving car being hit or hitting those other cars.

You get contacted by the human occupant and are told about this seeming ghost. What do you think?

First, let’s assume that this “actually happened,” in that the human occupant didn’t make it up (it’s conceivable, say, that the human was drunk at the time and just imagined that it occurred). The good news so far is that no one got hurt. It also wasn’t a panic move by the self-driving car, such as if the human had reported that the self-driving car swerved wildly. The human also indicated that this is the only time it has happened and that the human has been in this AI self-driving car many times before. It seems to be a one-time ghost. We’d also want to find out if the human had been in the AI self-driving car in a similar circumstance before, namely one in which there wasn’t any nearby traffic and the car was zooming along at high speed.

So, let’s go ahead and try to find the ghost.

With an AI self-driving car, there’s a chance that the sensor data might have been recorded either on-board the car or it might have been shared up into the cloud associated with the AI self-driving car. It would be handy to check and see if there might be a recorded indication of what the self-driving car was doing during the time that the reported ghost occurred. Using various specialized system tools, it might be possible to interrogate the recorded data and see if indeed the lane change happened.
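As a sketch of that kind of interrogation, suppose the recorded telemetry is a list of timestamped event records; the schema here is entirely made up for illustration, since real recorders vary by auto maker. Pulling the window around the reported time is then straightforward:

```python
from datetime import datetime, timedelta

def events_near(telemetry: list[dict], reported_at: datetime,
                window_s: float = 30.0) -> list[dict]:
    """Return recorded events within +/- window_s of the reported ghost.

    Each record is assumed (hypothetically) to look like:
      {"t": datetime(...), "event": "lane_change", "detail": {...}}
    """
    lo = reported_at - timedelta(seconds=window_s)
    hi = reported_at + timedelta(seconds=window_s)
    return [rec for rec in telemetry if lo <= rec["t"] <= hi]
```

If a lane change followed shortly by a lane change reversal shows up in that window, we at least know the maneuver really happened, and we can start hunting for what triggered it.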

One “reasonable” basis for the lane change and then the quick lane change reversal could be that the AI had detected something in the roadway, such as debris. The human occupant might not have seen the debris, or by the time the human looked around the self-driving car might have been past the debris and thus the human didn’t notice it. The AI might have wanted to get into the fast lane, made the move, then got close to some debris up ahead that was now in the range of the sensors, and opted to switch back into the slow lane to avoid the debris.

For coping with debris and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/roadway-debris-cognition-self-driving-cars/

For explanations by AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

For aspects about edge problems, see my article: https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

I’ve mentioned many times that once AI self-driving cars become prevalent, the odds are that the human occupants are going to want the AI to explain why it is doing things. In this case of the lane change, it would have been helpful if the AI had let the human know that it was making a lane change reversal due to the debris. This would have reassured the human occupant about what otherwise would seem to be a mystery move. Few of the auto makers and tech firms are working on this kind of explanation capability. Some argue that it would only confuse human occupants and possibly even scare them. My view is: why not let the human occupants decide whether to turn the explanation capability on or off?

There’s another handy reason for the explanation capability, namely that it would provide a kind of audit trail of the behavior of the AI. This could be very handy in situations such as this one of trying to ferret out why the AI opted to make the lane change maneuvers. Some are worried that the explanation capability might be used in lawsuits involving AI self-driving cars, which could actually be seen as a positive that aids in resolving lawsuits, rather than only a negative.

From a technical perspective, some say this is a hard thing to do, and so they push it off as an “edge” problem for now and something to be dealt with later on. Admittedly, there are explanation aspects that will be hard to turn into any kind of human understandable logical explanation. Suppose the self-driving car is using artificial neural networks and other machine learning systems. Those systems often are not based on a human-understandable logic per se and are instead mathematically based. Yes, it might be tough, but this doesn’t necessarily mean impossible to do.

In any case, returning to the case of the ghost that caused the unexpected lane change reversal, let’s assume that there was recorded data and upon inspecting the video of the cameras and the radar sensor data, we can ascertain that there was debris in the fast lane and that the self-driving car therefore made a sensible lane change reversal. Happy day, the ghost wasn’t a ghost.

Suppose, though, that there wasn’t any debris indicated by any of the sensors. This now means we do indeed have a mystery. The next aspect would be to look at what the sensor fusion did. It might have somehow garbled together the sensor data and inadvertently turned on a flag that said debris is in the roadway. This could be an error in the sensor fusion code. If the sensor fusion passed along the flag to the virtual world model, the virtual world would be marked with an indication of debris in the roadway ahead. Then, during the AI action plan updating, the AI planning portion would believe that there was debris up ahead and possibly opt to invoke a lane change. The lane change commands would then be issued by the car controls command issuance software.
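A toy version of that failure chain might look like this. The flag name, data shapes, and the particular bug are all hypothetical, but the sketch shows how a single bit set incorrectly at the fusion stage becomes a perfectly “rational” lane change three stages later:

```python
def sensor_fusion(radar_hits, camera_objects):
    # BUG (illustrative): debris should require corroboration from at
    # least two sensor types, but this comparison lets a single spurious
    # radar return set the flag with no real debris present.
    corroborations = len(radar_hits) + len(camera_objects)
    return {"debris_ahead": corroborations >= 1}  # should be >= 2

def update_world_model(world, fused):
    world["debris_ahead"] = fused["debris_ahead"]  # the garbage propagates
    return world

def plan_action(world):
    # The planner behaves sensibly given what it was told; the ghost was
    # born upstream, which is why the blame tends to land in the wrong place.
    return "change_lane" if world["debris_ahead"] else "lane_keep"

print(plan_action(update_world_model({}, sensor_fusion(["phantom echo"], []))))
# -> "change_lane", despite no actual debris anywhere
```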

As you now likely realize, there’s a lot of detective work in trying to find the ghost. Given what we’ve been told about the appearance of the ghost, we’d do a step-by-step walk-through of each of the components of the AI self-driving car to try to ascertain what might have produced the odd behavior. What makes this harder than it seems is that whoever is acting as the detective might not have a means to dig into the AI system to figure things out. Or, there might not be much recorded data about what happened, so there is sparse information available to help solve the mystery.

For those of you with finely honed debugging skills, you presumably recognize these ghostbuster moves. There’s a famous acronym in the computer field known as GIGO, meaning Garbage In, Garbage Out. This suggests that if a computer system lets in bad data, its output is going to be bad too. A more modern version is GDGI, Garbage Doesn’t Get In, meaning that it is best to design and develop computer systems that prevent garbage from getting in altogether.
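In the GDGI spirit, one concrete tactic is a plausibility gate on incoming sensor readings, so physically impossible values never reach sensor fusion at all. The thresholds below are arbitrary illustrations, not calibrated values:

```python
def plausible(reading: dict) -> bool:
    """Reject readings that cannot physically be right before they ever
    reach sensor fusion: garbage doesn't get in."""
    if not (0.0 <= reading.get("range_m", -1.0) <= 300.0):  # illustrative sensor max
        return False
    if abs(reading.get("relative_speed_mps", 0.0)) > 120.0:  # implausible closing speed
        return False
    return True

def gated_feed(raw_readings: list[dict]) -> list[dict]:
    kept = [r for r in raw_readings if plausible(r)]
    dropped = len(raw_readings) - len(kept)
    if dropped:
        print(f"[GDGI] dropped {dropped} implausible reading(s)")  # and report upstream
    return kept
```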

In terms of ghosts, if the auto maker or tech firm is focused on Ghosts Don’t Get In (GDGI), they are developing their AI systems to try not only to prevent ghosts, but also to self-detect ghosts. It is crucial that a true AI self-driving car be self-aware to the degree that it can try to detect when it performs oddball behavior. This can then be either self-repaired or at least reported to the auto maker or tech firm, so that they can proactively try to find a remedy, if relevant and needed, and push it out to the AI self-driving cars before it becomes a prevalent problem.
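Self-detection can start as something as simple as a watchdog over the AI’s own maneuver stream. Here’s a minimal sketch that flags a lane change undone within a few seconds, the exact tell-tale pattern from the running example; the five-second threshold and the maneuver names are made-up values for illustration:

```python
from collections import deque

OPPOSITE = {"lane_change_left": "lane_change_right",
            "lane_change_right": "lane_change_left"}

class ManeuverWatchdog:
    """Flags a lane change that gets reversed within a short window."""

    def __init__(self, reversal_window_s: float = 5.0):
        self.window = reversal_window_s
        self.recent = deque(maxlen=10)  # (timestamp_s, maneuver) pairs

    def observe(self, t_s: float, maneuver: str) -> None:
        for prev_t, prev_m in self.recent:
            if prev_m == OPPOSITE.get(maneuver) and t_s - prev_t <= self.window:
                self.report(t_s, f"lane change reversed after {t_s - prev_t:.1f}s; possible ghost")
        self.recent.append((t_s, maneuver))

    def report(self, t_s: float, msg: str) -> None:
        print(f"[SELF-CHECK t={t_s:.1f}] {msg}")  # stand-in for uploading a ghost report

w = ManeuverWatchdog()
w.observe(10.0, "lane_change_left")
w.observe(13.0, "lane_change_right")  # -> flagged as a possible ghost
```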

Ghosts are going to be perceived as being in AI self-driving cars, and it’s up to the auto makers and tech firms to be ready for it. Too many ghosts could scare the public about the reliability of AI self-driving cars, and I wouldn’t blame the public. AI developers have to think about being ghostbusters, both when the AI systems are being developed and when the AI systems are in the field, where ghost finding and ghost removal are needed in the real world. That’s no apparition.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.