Brainjacking for Self-Driving Cars: Mind Over Matter


By Dr. Lance B. Eliot, the AI Trends Insider
Be ready to have your mind hijacked by what I am about to tell you.

Let’s not get ahead of ourselves, though, and so we should start the story at the beginning.

For the future of self-driving cars, we want to ultimately have “true” self-driving cars. By this use of the word “true,” I am referring to the notion that a true self-driving car is one that can entirely drive the car by itself, and does not need any human intervention during the driving task. This is considered a Level 5 self-driving car (see my column on the Richter scale for self-driving cars).

The AI of the car is able to deal with any driving situation and respond in the same manner that a human driver could. There is no need for a human driver in the self-driving car; instead, all the humans in the self-driving car are merely occupants or passengers. Even if there happens to be a human driver in the self-driving car, their ability to drive is unrelated to the driving of that self-driving car.

Levels 1 through 4 of a self-driving car require that a human driver be ready to intervene in the driving task. The human driver must be present in the car whenever the self-driving component is active and driving the car. The human driver must be ready to take over control of the car, perhaps even at a moment’s notice (see my column on the human factors for self-driving cars). The human driver cannot allow themselves to become overly distracted and distant from the driving task. If the human driver does become severely distracted, they endanger their own safety, the safety of the other occupants, and the safety of anyone around the self-driving car, such as other cars and their drivers, pedestrians, and the like.

What do I mean by saying that a human driver might become severely distracted? I suppose I am using too strong a word. If the self-driving car is going on the highway at 80 miles per hour, and if the human driver that is supposed to be ready to take over the controls is watching a video or reading the newspaper, it is chancy that they will be able to properly and rapidly take over control of the car, if needed to do so. They would need to first become aware that there is a need to take over control of the self-driving car. This might happen by the self-driving car alerting them, such as flashing a light at the human driver or making a chime or tone.

But, the human driver cannot assume that the self-driving car will be wise enough to warn them about taking over control. It could be that the self-driving car is getting itself into a dire situation and the AI does not even realize it. Therefore, the human driver should be paying enough attention to know that they might need to take over control of the car. This can also occur if the AI itself or some part of the self-driving car automation is failing. If a key sensor fails, the self-driving car might not be a viable driver any longer. Whether it warns the human driver or not, if the human driver suspects that the automation is faltering, they are expected to take over control of the car.
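To make that dual responsibility concrete, here is a minimal sketch, in Python, of how a Levels 1 through 4 system might combine its own takeover alerts with a check on driver attentiveness. Every class, function name, and threshold here is hypothetical, invented purely for illustration; it is not any automaker’s actual logic.

```python
# Illustrative sketch only: hypothetical takeover-request logic for a
# Levels 1-4 system, combining system-initiated alerts with a check that
# the human driver is still paying attention. All names are made up.

from dataclasses import dataclass

@dataclass
class VehicleStatus:
    sensors_ok: bool        # e.g., lidar/radar/camera health checks pass
    ai_confident: bool      # the driving AI believes it can handle the scene
    driver_attentive: bool  # e.g., from a hypothetical driver-monitoring camera

def request_takeover(status: VehicleStatus) -> bool:
    """Return True if control should be handed back to the human driver."""
    # Case 1: the automation knows it is in trouble and warns the driver.
    if not status.sensors_ok or not status.ai_confident:
        return True
    # Case 2: the automation seems fine, but an inattentive driver is itself a
    # risk, since the AI might not realize it is getting into a dire situation.
    if not status.driver_attentive:
        return True
    return False

def alert_driver() -> None:
    # In a real vehicle this would flash a light, sound a chime, and so on.
    print("TAKE OVER: flashing dashboard light and sounding chime")

if __name__ == "__main__":
    status = VehicleStatus(sensors_ok=False, ai_confident=True, driver_attentive=True)
    if request_takeover(status):
        alert_driver()
```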

The Level 5 self-driving car is instead completely different. It assumes that no human driver is needed, ever. For this assumption, I have referred to a Level 5 car as a moonshot. Developing automation that can fully drive a car in all situations, acting as a human being acts and showcasing the human intelligence required to fully drive a car, is, I assure you, a huge stretch goal. Indeed, Apple CEO Tim Cook has rightfully referred to this as the mother of all AI projects (see my column on this topic). The cognitive capabilities needed to handle a car in all situations, without relying upon a human, amount to an incredible feat, and one that, if we can pull it off, means that AI will be able to do lots of other nifty things.

The gap between a Level 4 self-driving car and a Level 5 self-driving car is considered by some to be minor. I consider the gap to be enormous. It is a gap the size of the Grand Canyon. Getting from the dependence on a human driver at Level 4 over into the realm of no human driver at Level 5 requires a leap of belief and faith that we will get there. Those last “few” aspects needed to get away from relying upon a human driver are not just leftovers. They are the final mile, one that will take a tremendous amount of effort and breakthroughs to reach.

Can we make that jump from Level 4 to Level 5? Maybe. Maybe not. No one really knows. I realize that there are daily predictions of when we will see a Level 5 self-driving car, but you need to carefully review what those claims are about (see my column on AI fake news about self-driving cars). Most of the pundits making those predictions don’t seem to have laid out, item by item, all the cognitive aspects that would need to be handled to fully drive a car with automation. They also don’t seem to have carefully reviewed the sensory capabilities of cars and connected the dots that it is both a mind and body question. The “mind” part of the AI for the self-driving car needs incredible capabilities, and it also needs the sensory “body” aspects to likewise perform the driving task.

Again, I want to emphasize that a Level 5 self-driving car needs to be able to be driven entirely by the automation, and be driven wherever a human driver can drive. This means that the Level 5 self-driving car can’t be limited to driving only on the highways. It must be able to drive on the highways, on city streets, in the suburbs, and so on. Any situation you can think of, that’s where the Level 5 must have proficiency to drive. And, the driving obviously has to be safe. In other words, if you hand me the keys to a Level 5 (alleged) self-driving car, and it turns out that upon using it, the self-driving car crashes because it couldn’t figure out how to navigate a mountain road that a human driver could drive, that’s not a Level 5 self-driving car.

Let’s be clear, you cannot announce that a self-driving car is a Level 5 simply because you wish it so. It needs to walk the talk, so to speak.

Now, suppose that we try and try, but we just cannot develop AI sufficiently to reach a Level 5. Maybe it’s an impossible goal. Or, maybe it is possible, but it will be centuries before we perfect AI sufficiently to be able to do so.

Should we then just settle on Level 4 self-driving cars? Would we say to ourselves, hey, we don’t know when or if ever we can get to Level 5, but at least we got to Level 4?

We could say that. Or, we could try to find another way to get us to Level 5.

Okay, here’s where we take a big leap, so to speak, and think outside the box. Are you ready to think outside the box?

Suppose we augment the AI of the self-driving car with the incredible powers of the human mind, making a connection between the AI of the self-driving car and a human driver in the car. Wait, you say, isn’t this the same thing as needing a human driver in the Level 4 and below levels?

Not exactly. Here’s what I am suggesting. We are proposing to use a Brain-Machine Interface (BMI).

Research in neuroprosthetics has been making good progress recently. This consists of having a means to connect automation to the human brain in a relatively direct manner, an approach formally known as BMI.

I realize this seems far-fetched. This is like some kind of science fiction story. Well, others are thinking that we are soon going to see exponential growth in our ability to create connections to the human mind. BMI is expected to be a huge growth industry with incredible market potential. I wouldn’t count out the possibility.

Sure, today, the BMI connections are very crude. They are limited to simple readings of brain waves or electromagnetic pulses emitted by the brain. We don’t know yet what those readings and pulses really mean in terms of the higher-order thinking going on in the brain. Efforts to use brain and machine connections are currently limited to aspects such as trying to make your mind blank versus trying to fill your mind with thoughts, with the connected device, trying to read your mind, sensing that you are in one of those two states. This is very primitive, but at least promising.
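To give a feel for just how primitive this two-state reading is, here is a toy sketch in Python: it classifies a window of brain-wave samples as either “blank” or “active” based on nothing more than overall signal energy. The sample values and the threshold are invented for illustration; real EEG processing is far more involved than this.

```python
# Toy illustration of a crude two-state BMI reading: classify a window of
# brain-wave samples as "blank" or "active" from its overall signal energy.
# The threshold and the sample data are invented for illustration.

from typing import List

ENERGY_THRESHOLD = 0.5  # made-up cutoff separating the two mental states

def mean_energy(samples: List[float]) -> float:
    """Average squared amplitude of the recorded window."""
    return sum(s * s for s in samples) / len(samples)

def classify_state(samples: List[float]) -> str:
    """Return 'active' if the window looks like busy thinking, else 'blank'."""
    return "active" if mean_energy(samples) > ENERGY_THRESHOLD else "blank"

if __name__ == "__main__":
    quiet_window = [0.05, -0.02, 0.01, -0.04, 0.03]
    busy_window = [0.9, -1.1, 1.3, -0.8, 1.0]
    print(classify_state(quiet_window))  # -> blank
    print(classify_state(busy_window))   # -> active
```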

Can BMI today read your innermost thoughts? No. Will it ever? We don’t know. There are ethical discussions about whether we should let science take us that far. Maybe we should not have devices that could read our true thoughts. Will we lose a sense of personal privacy that we have taken for granted since the beginning of mankind? Will we potentially lose our sense of autonomy and maybe become locked into being connected with a device or automation?

Some are calling this BMI connection “brainjacking,” and it is gradually emerging as a handy way to think about this topic. I suppose it is partially a misnomer in that we think of jacking something as a hijacking of it. Humans might want to use these BMI devices, voluntarily, and for their own desired purposes. That would not be a hijacking. It would be something that humans do because they wish to do so. Now, that being said, you might say that a variant would be circumstances whereby someone is being forced into allowing their brain to be hijacked by automation, in which case brainjacking may be the more sensible term.

How does this apply to self-driving cars?

I am glad you asked. We could potentially have a BMI device in a Level 4 self-driving car. Most of the time, the self-driving car and the AI are doing just fine. But, when the tough situations arise, namely the circumstances that we want a Level 5 self-driving car to be able to handle, the Level 4 taps into the human mind to figure out how to properly handle the situation. We would potentially still consider the automation to be driving the car, since it is taking in the sensory data, analyzing it, and relaying commands to the controls of the car. The human that is connected mentally to the self-driving car is providing that last backstop, that final piece of the puzzle, adding cognitive power when needed.
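As a rough sketch of that escalation idea, consider the following Python snippet. The planner, the BMI channel, the confidence threshold, and the scene are all invented stand-ins; the point is simply that the AI keeps driving, and only when its own confidence drops does it ask the BMI-connected human for a high-level hint.

```python
# Hypothetical sketch of a Level 4 system that escalates hard cases to a
# BMI-connected human occupant. The AI keeps control of the vehicle; the
# human only supplies a high-level cognitive hint. Every class, value, and
# scene here is invented for illustration.

from typing import Tuple

CONFIDENCE_THRESHOLD = 0.8  # made-up cutoff for "the AI can handle this alone"

class AIPlanner:
    def propose(self, scene: str, hint: str = "") -> Tuple[str, float]:
        """Return (plan, confidence); confidence is low on an unusual scene."""
        if scene == "unmarked mountain switchback" and not hint:
            return ("slow to a crawl", 0.4)
        return (f"follow planned route ({hint or 'no hint'})", 0.95)

class BMIChannel:
    def request_hint(self, scene: str) -> str:
        """Stand-in for reading a high-level intention from the human mind."""
        return "hug the inside edge and proceed slowly"

def plan_with_human_backstop(scene: str, ai: AIPlanner, bmi: BMIChannel) -> str:
    plan, confidence = ai.propose(scene)
    if confidence >= CONFIDENCE_THRESHOLD:
        return plan                      # routine: the AI drives on its own
    hint = bmi.request_hint(scene)       # tough case: tap the human mind
    plan, _ = ai.propose(scene, hint=hint)
    return plan

if __name__ == "__main__":
    print(plan_with_human_backstop("unmarked mountain switchback", AIPlanner(), BMIChannel()))
```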

You might wonder what is the difference between this aspect and simply having a human driver in the self-driving car that is ready to intervene. There are some differences.

First, the human driver that is ready to intervene is normally expected to be in the driver’s seat and be ready to take the controls. Physically, they are positioned to take over control of the vehicle. The BMI-connected human, by contrast, can be anywhere in the self-driving car and is not physically accessing the controls of the car. They are purely added cognitive power for augmenting the AI. The AI has reached a juncture that it cannot “think its way out of” and needs the human to do so.

Second, the BMI-connected driver could more readily interact with the AI of the self-driving car, in the sense that if the human in the vehicle wanted to alter the course of the car or change some other aspect, they could think it and convey it as such to the AI. With the conventional approach of a human driver seated in the driver’s seat, we are so far envisioning that they will be issuing verbal commands to the AI (see my column on in-car commands for self-driving cars). Verbal commands have their limitations due to the speed of being able to speak, confusion over the use of words and what those words mean, etc.

Third, with the BMI, the human in the self-driving car does not necessarily need to have the physical abilities needed to drive the car. This means, for example, that someone with a disability who otherwise cannot drive a conventional car could become a co-driver with the AI of the self-driving car. Or perhaps someone that is elderly and has slowed reaction times, and is therefore an unsafe driver if at the wheel. With the BMI approach, the physical aspects of the human don’t particularly matter, since the AI and self-driving car will be doing all the physical aspects of driving.

The aforementioned approach does have its own downsides.

Suppose the BMI human that is connected to the self-driving car has an epileptic seizure. What does the AI do with this? Is the human trying to convey something to the AI about the car, or is it completely unrelated?

Similarly, how can the AI differentiate the thoughts in the head of the human? If I am a human in a self-driving car and have some kind of BMI connection, and I begin to daydream about race cars and going 120 miles per hour, suppose the AI of the self-driving car interprets this to suggest that I want the AI to push the car up to 120 mph and go car racing.

We’d have to have some perfected way of being able to mentally communicate with the AI of the self-driving car. Further, we’d need to have the AI be on its guard to not take commands that seem untoward. This is not just applicable to the BMI situation, but would be true for any circumstance wherein we are allowing a human occupant in the self-driving car to provide commands to the self-driving car. I’ve previously brought up that the human could be suicidal and use in-car verbal commands to tell the car to run over people. This is something we need to prevent, regardless of whether the commands are issued verbally or by some kind of mind-reading device.
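One way to picture that guard, whether the commands arrive verbally or over a BMI, is a simple safety filter sitting between the command channel and the driving controls. The sketch below is in Python, and its rules, limits, and command format are made up for illustration; it merely shows the flavor of rejecting requests that exceed a legal speed or that would aim the car toward people.

```python
# Illustrative safety filter for occupant commands, whether spoken or
# inferred over a BMI. Rules, limits, and the command format are all invented.

LEGAL_SPEED_LIMIT_MPH = 65
FORBIDDEN_TARGETS = {"pedestrian", "sidewalk", "crowd", "oncoming lane"}

def vet_command(command: dict) -> bool:
    """Return True only if the requested command looks safe to execute."""
    requested_speed = command.get("target_speed_mph", 0)
    if requested_speed > LEGAL_SPEED_LIMIT_MPH:
        return False  # e.g., a daydream about going 120 mph is not a command
    if command.get("target") in FORBIDDEN_TARGETS:
        return False  # never steer toward people, regardless of the request
    return True

if __name__ == "__main__":
    print(vet_command({"target_speed_mph": 120}))                   # False
    print(vet_command({"target": "pedestrian"}))                    # False
    print(vet_command({"target_speed_mph": 55, "target": "exit"}))  # True
```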

Another question arises about who in the self-driving car can be BMI-connected to the AI. So far, for Levels 1 through 4 of a self-driving car, the human has to be someone that is able to drive a car. They need to be a licensed driver, qualified and validly licensed to drive a car. Would the BMI-connected human need to also fit this definition? Some would say yes, of course they need to be properly licensed as a driver since they are in a sense co-driving the vehicle. Others would say no, they don’t need to be licensed per se. If an elderly person could not get their driver’s license because of physical limitations and yet still has the mental capacity to reason and mentally drive a car, would they not be considered okay as the co-driver?

The whole topic is rife with questions that involve societal issues. If the self-driving car crashes, who would be liable, the co-driving human or the AI? Would we even be able to discern which one was making the decisions about the driving of the car? Should the self-driving car maker be the one held responsible, since they provided the BMI capability to begin with? Will there be some enterprising entrepreneurs that opt to make a BMI device for co-driving of a self-driving car, and, without permission from the maker of the self-driving car, provide an add-on device that would allow the human BMI connection for co-driving?

Nobody yet knows any of this. We are very early on. But, there is a realistic possibility that we could have BMI connected devices, and we could use them in self-driving cars. For that aspect, I urge you to consider ways in which this can be done. In our lab, we’ve been toying with various crude mechanisms, and are eager to see how BMI evolves and then take this further along for self-driving cars. I suppose you could say that when it comes to driving a car, maybe two heads are better than one.

This content is originally posted on AI Trends.