Robojacking of Self-Driving Cars: Prevention by Better AI

By Dr. Lance B. Eliot, the AI Trends Insider

Carjacking was a new term added to our vocabulary back in the early 1990s. The case that gave rise to the term involved a female drugstore cashier in Detroit who refused to hand over her car to a hijacker and was shot dead for her refusal. A newspaper picked up the story and called it a carjacking.

For the next several years, the United States seemed to get caught up in a national frenzy of carjackings. Some estimates indicate there were about 40,000 carjackings a year throughout the 1990s. The crime became so prominent and prevalent that Congress proposed and passed the Federal Anti-Car Theft Act of 1992, known as FACTA. It made carjacking a federal crime and allowed for a punishment of fifteen years to life for using a firearm to steal a car from someone.

During the height of this evil fad, I vividly remember something that happened to an executive I knew well at a major firm in downtown Los Angeles. I was doing consulting work for his firm. We had both been working late one night, and I finally told him that I’d have to head home to see my kids before they went to bed (I always strived to get home and tuck the kids in, and usually tell a bedtime story, about self-driving cars, naturally). He said he’d be leaving shortly after I left. The firm was well known and located in a good part of downtown.

I saw him the next day at work, and he had an incredible story to tell. When he finally left work, he drove out of the gate-guarded parking lot and headed directly to the freeway. Once on the freeway, he realized that his car was very low on gasoline. He figured that if he saw a gas station right next to the freeway, it would be safe to exit, drive straight to the gas station, get some gas, and then get back onto the freeway and be on his way. The whole stop should take maybe three to four minutes at most. He saw a lit-up gas station billboard, the station looked safe enough, and so he exited the freeway to get his gas.

He pulled up to the gas pumps. No one else was getting gas. It was late at night. It was not the best part of Los Angeles, but he figured he was right near the freeway and things seemed very quiet at the gas station. As he started to pump gas into his car, an expensive foreign model, another car pulled into the station and stopped next to where he was standing. Out jumped a guy with a gun. The gunman pointed it at him and demanded his wallet. He handed over his wallet, figuring it wasn’t worth the chance of dying just to keep it.

The gunman then asked for the keys to the car. He handed over the keys. The gunman got into the car and drove off. My colleague and friend stood there at the gas station, shocked. In less than one minute, he had his life threatened, gave up his wallet with all of his private information in it, and had his car stolen. So much for the idea that nothing bad could happen during a quick three-to-four-minute gas station stop. He was lucky to be alive.

In case you are curious about what happened next, here’s the twist to the story. He called his wife right away to tell her what had happened. Just after he did so, the police called his wife. It turns out that the gunman had driven a few blocks away and run through a stop sign. A police car was there, and the police stopped the vehicle. They got the gunman out of the car, found the wallet, realized that the car wasn’t owned by the guy driving it, and used the car registration info to call my colleague’s home. His wife answered the phone. Imagine if he had not already called her. She would have gotten a call from the police telling her that they had a man with a gun who was carrying her husband’s wallet and driving his car. She would likely have assumed he was dead in a ditch somewhere.

Anyway, carjackings still happen, and they occur throughout the world. Some countries face lots of carjackings. Some carjackings are random in nature, wherein the evildoer simply seizes upon a particular opportunity and takes a car from someone by force. In other cases, it is a planned activity. For example, mobsters or kidnappers will carefully plan when the mark will be in their car and how best to take them and their car.

What does this have to do with self-driving cars?

The answer is that there are predictions that self-driving cars will also be attacked by carjackers. In fact, the world has gained a new term for this: robojacking.

The term has not yet hit the big time of being formally adopted by the major dictionaries; you’d more likely find it in an urban dictionary. Most use the word robojacking to refer to hijacking a self-driving car, but others say it is more broadly a term for hijacking any smart autonomous device, even an autonomous vacuum cleaner. For purposes herein, let’s use robojacking to denote the hijacking of a self-driving car.

The “robo” part of robojacking is intended to suggest that the self-driving car is being driven by a robot. We know that this is not where self-driving cars are headed, in the sense that there won’t be an actual robot sitting in the driver’s seat. Instead, the AI of the self-driving car is embedded within the car and its many computer processors. We’ll be lenient and use “robo” to imply that the self-driving car is not being driven by a human and is instead being driven by a robot-like system, namely the AI of the self-driving car.

Is robojacking the same as carjacking? In many ways, it is. The same situations that lead to a carjacking can likewise happen with robojacking. We’ll take a look at the standard ways in which police advise you to avoid getting carjacked, and see if they apply to robojacking.

  •       Try not to drive in bad areas where crime is rampant

This is equally good advice for anyone in a self-driving car as it would be for a human-driven car.

I suppose the main difference will be that when a human is driving a car, even if they are using a GPS, they are usually still paying attention to the area they are driving through.

When in a self-driving car, you might become so focused on what you are doing inside as an occupant, and since there is no human driver (at least for the Level 5 true self-driving car, see my column on the Richter scale for self-driving cars), you might not be aware of where the car is going during the journey to your destination.

This can be somewhat readily resolved by having the self-driving car route around or away from bad areas. Also, the self-driving car could use its own sensory capabilities to assess its surroundings, and if it begins to detect graffiti or other signs that the area is bad, it could automatically reroute.
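As a rough illustration of what such risk-aware rerouting might look like, here is a minimal sketch in Python. The street names, the per-segment risk scores, and the risk_weight trade-off are all hypothetical; a real system would draw on actual crime data, live sensor observations, and the car maker's own routing engine.

```python
# Minimal sketch: choose among candidate routes by trading off travel time
# against an assumed per-segment "area risk" score. All data here is made up.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    name: str
    minutes: float      # estimated travel time for this segment
    risk: float         # hypothetical 0.0 (safe) to 1.0 (avoid) area score

def route_cost(route: List[Segment], risk_weight: float = 10.0) -> float:
    """Total minutes plus a penalty for passing through risky areas."""
    return sum(s.minutes + risk_weight * s.risk for s in route)

def pick_safest_reasonable_route(candidates: List[List[Segment]]) -> List[Segment]:
    """Return the candidate route with the lowest combined cost."""
    return min(candidates, key=route_cost)

if __name__ == "__main__":
    direct = [Segment("5th St", 4, 0.8), Segment("Main St", 3, 0.2)]
    detour = [Segment("Grand Ave", 6, 0.1), Segment("Main St", 3, 0.2)]
    best = pick_safest_reasonable_route([direct, detour])
    print("Chosen route:", " -> ".join(s.name for s in best))
```

The design choice here is simply to fold area risk into the same cost function the router already minimizes, rather than bolting on a separate veto step.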

At the Cybernetics Self-Driving Car Institute, we are exploring these kinds of augmented capabilities for the AI of the car, in order to make it more aware of the driving journey and minimize the chances of a robojacking. As you’ll see next, there are many additional ways to do this.

  •       When stopped in traffic, keep some distance between your vehicle and the vehicle ahead of you

This advice is based on the notion that if someone tries to come up to your car and point a gun at you, you could maneuver the car out of traffic and try to drive away. Of course, if the gunman wants to shoot you, this could get you shot right away. It’s a trade-off, but it generally seems like good advice to keep avenues of escape open.

This is readily adapted for self-driving cars. If an occupant is worried about being robojacked, the self-driving car could be on alert, with the AI continuously calculating ways to keep the vehicle from getting pinned in. Furthermore, the AI could be continuously identifying escape routes to get the self-driving car out of a bad situation.

How would the self-driving car know that it is time to escape? This can involve the occupant talking with the self-driving car. I’ve covered this in my column on in-car commands for self-driving cars.
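To make that concrete, here is a hedged sketch of how an occupant’s spoken command might flip the AI into an "escape alert" mode that keeps extra space to the car ahead and maintains a running list of evasive options. The trigger phrases, the EscapePlanner class, the gap distances, and the maneuver names are all invented for illustration.

```python
# Sketch only: an occupant voice command switches the AI into an "alert" mode
# that keeps an escape buffer and a ready list of evasive options.
# Phrases, thresholds, and maneuver names are hypothetical.

ALERT_PHRASES = {"i feel unsafe", "escape mode on"}
CANCEL_PHRASES = {"escape mode off", "all clear"}

class EscapePlanner:
    def __init__(self, normal_gap_m: float = 2.0, alert_gap_m: float = 6.0):
        self.normal_gap_m = normal_gap_m
        self.alert_gap_m = alert_gap_m
        self.alert = False

    def handle_voice_command(self, utterance: str) -> None:
        text = utterance.strip().lower()
        if text in ALERT_PHRASES:
            self.alert = True
        elif text in CANCEL_PHRASES:
            self.alert = False

    def desired_gap_m(self) -> float:
        """Keep a larger gap to the car ahead while in alert mode."""
        return self.alert_gap_m if self.alert else self.normal_gap_m

    def escape_options(self, left_lane_open: bool, right_lane_open: bool) -> list:
        """List currently feasible evasive maneuvers (illustrative only)."""
        options = []
        if left_lane_open:
            options.append("swerve_left")
        if right_lane_open:
            options.append("swerve_right")
        options.append("reverse_if_clear")
        return options

if __name__ == "__main__":
    planner = EscapePlanner()
    planner.handle_voice_command("I feel unsafe")
    print(planner.desired_gap_m())              # 6.0 while in alert mode
    print(planner.escape_options(True, False))  # ['swerve_left', 'reverse_if_clear']
```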

  •       Park your car in well-lighted areas that seem safe

For human drivers, it’s certainly good advice to always try to park in a well-lit area and in as safe a spot as you can find. I knew one driver who always parked his car under a street light, hoping that the light would discourage anyone from harming or stealing it. I am not sure that it made a difference, but one thing that did happen was kind of ironic. One day, he came out to his beloved car and found that someone had thrown a rock at the street light, shattering the glass, which came down directly onto the hood of his car. As I say, it was somewhat ironic that his method of trying to be safe had led to this.

For self-driving cars, I’ve already discussed in several of my columns how self-driving cars will get better and better at parking. One aspect that most of the self-driving car makers are not considering is the choice of where to park. In other words, they leave it to the human occupant to tell the self-driving car where to park, and then the self-driving car dutifully parks in that spot.

I’ve already been saying that we’ve got to make self-driving cars smarter so that they can also identify where to park. No need to burden the occupant. That being said, this does not preclude allowing the occupant to indicate where they want to park. It is just as though you had a chauffeur driving your car for you. You might normally allow the chauffeur to figure out where to park, but other times you might indicate a specific spot that you desire.

A self-driving car could use clues from its surroundings to gauge how safe a parking location is. The lighting can be detected by the cameras of the self-driving car. Are other cars similar to yours parked there? The image analysis capabilities of the AI can try to ascertain that. We often look to see whether cars similar to our own are parked someplace, figuring that the spot is likely “safe” if other cars of the same kind are parked there. And so on.
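As a small illustration of how those clues might be combined, here is a sketch that scores candidate parking spots from a few assumed cues (detected light level, count of similar cars nearby, distance to an entrance). The features and weights are my own assumptions, not anything a particular car maker has published.

```python
# Sketch: score candidate parking spots from a few hypothetical cues the
# car's cameras and maps could supply. Weights are illustrative only.

from dataclasses import dataclass

@dataclass
class ParkingSpot:
    label: str
    light_level: float         # 0.0 dark .. 1.0 brightly lit (from camera exposure)
    similar_cars_nearby: int   # count of comparable vehicles seen by image analysis
    meters_to_entrance: float  # closer to foot traffic is assumed safer here

def safety_score(spot: ParkingSpot) -> float:
    return (
        3.0 * spot.light_level
        + 0.5 * min(spot.similar_cars_nearby, 5)   # cap the benefit
        - 0.02 * spot.meters_to_entrance
    )

def choose_spot(spots):
    return max(spots, key=safety_score)

if __name__ == "__main__":
    spots = [
        ParkingSpot("back corner", light_level=0.2, similar_cars_nearby=0, meters_to_entrance=120),
        ParkingSpot("near entrance", light_level=0.9, similar_cars_nearby=3, meters_to_entrance=15),
    ]
    print("Chosen spot:", choose_spot(spots).label)
```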

  •       Keep the car windows up and the doors locked

This piece of conventional advice is a somewhat broad generalization. Yes, I think we all now agree that you should keep your car doors locked at all times. We’ve gotten used to this as a common feature of any modern car. The part about keeping the windows up, though, is over the top. Are we never to allow our windows to be down?

One approach would be that if the car is in motion above a certain speed, say 5 miles per hour, the windows can be down. Once the car drops to lower speeds, the windows automatically roll up.

This could also be varied by the surroundings. If you are driving in Beverly Hills, maybe the windows can be down. If you are driving in a rougher part of town, perhaps the AI will automatically roll up the windows. This is something the AI would likely want to mention to the occupants, so they don’t get into a war with the AI about rolling the windows down and up.
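Here is a minimal sketch of such a window policy, assuming the 5 mph threshold mentioned above and a hypothetical area-risk score; the announce hook stands in for the AI telling the occupants what it is doing and why.

```python
# Sketch of an automatic window policy driven by speed and an assumed
# area-risk score. Thresholds and the announce() hook are hypothetical.

LOW_SPEED_MPH = 5.0          # at or below this, windows roll up
RISKY_AREA_THRESHOLD = 0.6   # at or above this score, windows stay up regardless of speed

def windows_should_be_up(speed_mph: float, area_risk: float) -> bool:
    return speed_mph <= LOW_SPEED_MPH or area_risk >= RISKY_AREA_THRESHOLD

def update_windows(speed_mph: float, area_risk: float, announce=print) -> str:
    if windows_should_be_up(speed_mph, area_risk):
        announce("Rolling windows up for safety.")
        return "up"
    return "down_allowed"

if __name__ == "__main__":
    print(update_windows(speed_mph=3.0, area_risk=0.1))   # low speed -> up
    print(update_windows(speed_mph=40.0, area_risk=0.8))  # risky area -> up
    print(update_windows(speed_mph=40.0, area_risk=0.1))  # ok to have them down
```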

  •       If you are confronted and have no viable recourse, get out and give up the car

This piece of advice is one that makes pretty good sense. Is it worth it to die for keeping possession of your car? Probably not. If there is no other evasive maneuver to prevent the carjacking or robojacking, maybe just give up the car and get away as quickly as you can.

For a human-driven car, it means that the carjacker can just jump into the driver’s seat and, assuming the keys are in the car, drive it away.

For self-driving cars, we’re going to have some interesting twists. First, as I’ve mentioned in my column on in-car commands, we don’t yet know how one will direct a self-driving car to drive. If a self-driving car only responds to, say, your face via facial recognition or your voice via voice recognition, the self-driving car is not going to go anywhere unless the robojacker keeps you in it.

This is bad, of course, because it suggests that robojackers will be more determined to keep the occupant rather than let them go. In essence, if the robojacker cannot drive your car away, as they can with a conventional car today, they will have more incentive to keep you as a hijacked victim.

For self-driving cars, we could have a capability wherein the occupant who has the right to tell the self-driving car what to do can also transfer that designation or make it an open designation. Thus, the occupant could tell the self-driving car to take commands from someone else, and the robojacker would not necessarily need to keep the occupant kidnapped.
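One way such a transfer of command authority could be structured is sketched below as a small state object. The transfer and open-designation methods are purely illustrative of the feature described above, not an existing API.

```python
# Sketch: who is currently allowed to command the self-driving car.
# The owner can hand authority to someone else, or open it to anyone,
# so a robojacker has no reason to keep the occupant in the vehicle.

class CommandAuthority:
    OPEN = "ANYONE"

    def __init__(self, owner_id: str):
        self.owner_id = owner_id
        self.authorized = {owner_id}

    def is_authorized(self, person_id: str) -> bool:
        return self.OPEN in self.authorized or person_id in self.authorized

    def transfer(self, from_id: str, to_id: str) -> None:
        """Current holder hands command authority to another person."""
        if from_id not in self.authorized:
            raise PermissionError("Only an authorized person may transfer authority.")
        self.authorized = {to_id}

    def make_open(self, from_id: str) -> None:
        """Let anyone command the car, e.g. under duress, so the occupant can leave."""
        if from_id not in self.authorized:
            raise PermissionError("Only an authorized person may open authority.")
        self.authorized = {self.OPEN}

if __name__ == "__main__":
    auth = CommandAuthority(owner_id="alice")
    auth.make_open(from_id="alice")
    print(auth.is_authorized("stranger"))  # True once the designation is opened
```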

Let’s suppose that we add such a feature to self-driving cars. There are some other twists involved. By and large, self-driving cars are going to be connected with main servers that car makers use to communicate with them. This is done to share the experiences of other self-driving cars with your self-driving car, improving its capability in a collective manner.

Anyway, presumably, if you got out of your self-driving car, handed it over to a robojacker, and they drove off, you could simply contact the car maker (or whoever is connected with your self-driving car), and they could then track where the self-driving car goes. They could then alert the police to go pick up the evildoer. Some even speculate that the server could tell the self-driving car to drive directly to the nearest police station. This would hand over the evildoer to the police without any kind of police chase.
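Speculatively, the server-side handling might look something like the following sketch: once a car is reported robojacked, its position reports get flagged, the police are notified, and, only if policy allows, a reroute command is sent. The message shapes and function names (notify_police, send_reroute) are assumptions made for illustration.

```python
# Sketch of a fleet server reacting to a robojacking report. Everything here
# (message shapes, notify_police, send_reroute) is hypothetical.

stolen_vehicles = set()

def report_robojacked(vehicle_id: str) -> None:
    stolen_vehicles.add(vehicle_id)

def notify_police(vehicle_id: str, position) -> None:
    print(f"[police] vehicle {vehicle_id} last seen at {position}")

def send_reroute(vehicle_id: str, destination: str) -> None:
    print(f"[server->{vehicle_id}] reroute to {destination}")

def on_position_report(vehicle_id: str, position, redirect_enabled: bool = False) -> None:
    """Called whenever a connected car phones home with its location."""
    if vehicle_id not in stolen_vehicles:
        return
    notify_police(vehicle_id, position)
    if redirect_enabled:
        send_reroute(vehicle_id, "nearest police station")

if __name__ == "__main__":
    report_robojacked("car-42")
    on_position_report("car-42", (34.05, -118.25), redirect_enabled=True)
```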

Is the public ready for this Big Brother kind of capability? Will car makers resist allowing the government to access info about the self-driving cars they have on the roads? Will they allow the government to control those self-driving cars remotely? We are all still trying to figure out whether we want the government to look at our Facebook pages or be able to crack into our encrypted iPhones. This whole aspect of self-driving cars and privacy is going to be another substantial can of worms. Mark my words!

Now, so far, I’ve covered the usual kinds of advice about avoiding carjacking and compared them to robojacking. There are some who express concerns of another kind related to robojacking.

Here’s one that could become a favorite evildoer practice:

  •       Stand in front of a self-driving car to make it come to a halt, then robojack it

Currently, most of the self-driving cars are very timid when it comes to taking action after detecting a pedestrian in front of the car. The self-driving car will come to a halt and do whatever it can to avoid hitting the pedestrian. There is an ongoing ethics debate about how self-driving cars should be programmed in terms of the potential for harming humans (see my column on the ethics of self-driving cars).

Suppose you are a robojacker. Suppose you know that self-driving cars won’t harm a pedestrian. You therefore walk straight up to a self-driving car and stand in front of it. You know it won’t try to run you over. If it were a human driver, you wouldn’t be so sure. You would realize that a human driver might just floor the gas and try to run you over. It becomes a game of chicken to see who blinks first. Does the gunman try to shoot the human driver, or does the human driver hope they can run over the gunman before he can get off a shot?

With the self-driving car, there is presumably no longer a game of chicken. The robojacker knows that the self-driving car won’t try to run them over. They could have an accomplice stand in front of the self-driving car, which comes to a screeching halt because the AI has been programmed to do so, while the gunman stands just to the side of the self-driving car, gun ready and aimed at the occupants. This seems like a pretty easy way to robojack.

The question then arises as to whether we want our self-driving cars to be so timid. Suppose we allow our self-driving cars to be more aggressive. If the AI believes that a robojacking is taking place, maybe it proceeds as though it intends to run over the pedestrian. How does it know, though, that this is a robojacking and not just someone kidding around? Can the AI be good enough to differentiate situations that are dire from those that are not? We could have the human occupant give a verbal command to the self-driving car, telling it whether or not a robojacking is occurring. If they say it is, then perhaps the self-driving car gets aggressive. But suppose the occupant is kidding and didn’t really mean to say that it is a robojacking. I think you can see the conundrum here.
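One way to blunt that conundrum is to require an explicit confirmation step, with a short time window, before the AI escalates out of its timid default. The sketch below is purely illustrative; the phrases, the ten-second window, and the notion of an "assertive" (rather than truly dangerous) mode are my assumptions.

```python
# Sketch: require a two-step verbal confirmation before the car escalates
# from its default timid behavior. Phrases and timings are hypothetical.

import time

CONFIRM_WINDOW_S = 10.0

class ThreatResponse:
    def __init__(self):
        self.pending_since = None
        self.mode = "timid"   # default: always yield to pedestrians

    def occupant_says_robojacking(self) -> str:
        self.pending_since = time.monotonic()
        return "Please confirm: say 'confirm emergency' to escalate."

    def occupant_confirms(self) -> str:
        if self.pending_since is None:
            return "No emergency was declared."
        if time.monotonic() - self.pending_since > CONFIRM_WINDOW_S:
            self.pending_since = None
            return "Confirmation window expired; staying in timid mode."
        self.mode = "assertive"   # e.g., creep forward, sound horn, alert police
        return "Escalating: alerting authorities and seeking an exit path."

if __name__ == "__main__":
    car = ThreatResponse()
    print(car.occupant_says_robojacking())
    print(car.occupant_confirms())
    print(car.mode)
```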

Of course, even human drivers are apt to do the same thing that the AI would do. We don’t know that all humans would try to run over a gunman. Some might, some might not.

Similarly, another style of robojacking might be a car in front of the self-driving car coming to a halt to pin it in, with an accomplice in a car behind it. This can happen with human-driven cars too, so there is not much difference, other than again the effort to escape, which, as mentioned, we could have the AI attempt.

Some are worried that the occupants of self-driving cars are going to be sitting ducks, as though being in a self-driving car makes them so. I don’t think this makes much sense. They are no more sitting ducks than they would be in a human-driven car. We can program the AI to be as loose and “dangerous” a driver as a human, if we want to do so. Indeed, we’ve been programming our AI to be able to play chicken with someone standing in front of a car. It is readily doable.

Right now, self-driving car makers are being extremely cautious and making self-driving cars as timid as possible. I’ve already mentioned that pedestrians can play games with self-driving cars by approaching one to get it to stop. This is the nature of today’s primitive approaches, and I seriously doubt we’re going to keep the same approach as self-driving cars get smarter and more prevalent.

Is robojacking something to be afraid of? Are we going to see a rampant new era of robojackings that will rival the 1990s era of carjackings? I don’t think we will.

That being said, I’d like to add that if we do nothing to prepare self-driving cars for robojackings, yes, we’d be setting ourselves up for this unfortunate wave of crime. On the other hand, I’d hope that as AI developers and self-driving car makers, we will be wise enough to anticipate robojackings and try to reduce the chances of them becoming a thing. Let’s all strive for that.

This content is original to AI Trends.