Responsibility and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

How do you get people to behave responsibly? That’s a refrain right now being asked and pondered regarding the scooter wars underway in cities such as San Francisco and Santa Monica.

In case you’ve not yet been indoctrinated into the scooter wars, allow me a moment to bring you into the fold.

Companies are springing up like wildflowers to provide motorized scooter rentals, and they are using a modern-day approach to do so. Rather than going to a storefront to rent a scooter, the prospective customer looks for one on a mobile app that shows a GPS map with scooter locations, akin to the Uber mobile app that shows ridesharing car locations. You simply walk over to the nearest scooter, rent it with a credit card, it unlocks, and you are good to go. There is an electronic lock on the scooter, and whoever last rented it is pretty much allowed to leave it wherever they want (it locks after the renter is done with the scooter). No need to take the scooter back to a rental office. No need to pick it up from a rental office.

Imagine that you are somewhere in Santa Monica and you want to get across town. You could potentially walk it, but that’s old fashioned, it might take a long time, and you’d likely not have much fun along the way. You could use a ridesharing service, but why do so when it’s a nice day out and the distance is just a mile or two, plus the cost might be relatively high for such a short distance. It would be nifty if somehow there was a motorized scooter, in good shape, waiting for you, close to where you are now standing. Voila, just use the modern-day scooter rental. Bird is probably the best-known such scooter rental right now and it seems to be the darling of the venture capital crowd.

It all seems like a win-win. Good for the customer, good for the rental firm. But I’ve mentioned earlier here that there is a scooter war going on. What could be wrong with the scooter rental equation? Answer: people are doing things they should not be doing. Human behavior, it’s a thing.

First, the human “driving” the scooter is supposed to obey the rules of the road. You must not ride on sidewalks (unless there are allowed exceptions, which are few). You can’t run red lights. You must stop at stop signs. You must obey the usual traffic laws. Guess what, many of the scooter renters are not abiding by the traffic laws. Shocking? Not really.

Second, there are often local helmet laws that require the use of a helmet when riding a motorized scooter. Bird will actually provide you with a free helmet if you sign up with them (they mail you the helmet). Guess what, many of the scooter renters are not wearing helmets. Shocking? Not really.

Third, scooter “drivers” have been commonly seen cutting off cars, cutting off bicyclists, and cutting off pedestrians. They have been known to zip through crowds at outdoor malls and to weave in and out of sidewalks teeming with people. Dangerous for all. Completely wrong to do. Shocking? Not really.

These scooters can typically go around 15 miles per hour, which is plenty fast when riding near people, on sidewalks, and in other areas containing pedestrians. There has been an uptick in injuries for scooter riders, including cuts, abrasions, and head injuries. For the mere price of about one dollar to unlock the scooter, and then at a cost of about 15 cents per minute, you can have the joy and adventure of your life. You get to work faster. You think you are a race car driver. Meanwhile, those that abuse the scooter capability are apt to make life difficult and dangerous for everyone else.
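To put the pricing into perspective, here is a minimal sketch of a per-ride cost calculation using the approximate figures just mentioned; the unlock fee and per-minute rate are assumptions that vary by city and company:

```python
# Minimal sketch of scooter ride cost; the fee and rate are approximations.
UNLOCK_FEE = 1.00        # dollars to unlock, approximate
PER_MINUTE_RATE = 0.15   # dollars per minute, approximate

def ride_cost(minutes: float) -> float:
    """Estimated cost of a single scooter ride lasting `minutes` minutes."""
    return UNLOCK_FEE + PER_MINUTE_RATE * minutes

# A two-mile trip at roughly 15 mph takes about 8 minutes:
print(f"${ride_cost(8):.2f}")  # roughly $2.20
```

In other words, a ride across town costs only a couple of dollars, which helps explain why the scooters are so popular and so casually used.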

There are even concerns about where the scooters are being placed while not in use. Some renters just put the scooter wherever they darned well please. Sometimes they place it in front of a store, blocking the entrance and exit of the store. Sometimes they place it in a driveway, blocking vehicles from readily going into and out of the driveway. It’s like discarding trash and not having to worry about putting the trash into a proper receptacle. You reach your desired destination, step off the scooter, indicate you are done renting it via your mobile app, and walk away from the scooter. Why should you care that it now possibly is in the way of others? Shocking? Not really.

The scooter rental firms like Bird have tended to defend their business model by indicating that they do all sorts of things to prevent people from misusing the scooters. When you sign up for the scooter rental, the app tells you not to do the things I’ve just mentioned that people are doing in droves. On the screen, it tells you to obey the traffic laws. It tells you not to use the scooter in ways that can endanger people. It tells you that you need to wear a helmet if the local laws say you do (and some, like Bird, can provide you one, though you need to remember to take it with you). Blah, blah, blah.

Why did I say blah, blah, blah? Because there are now opponents of these scooter rentals who say that it is insufficient to simply notify people that they need to obey the law, wear helmets, and so on. It has little teeth. It is an attempt by these firms to provide a product or service that in the end is dangerous, and yet they can pretend that they have warned the renter “drivers” of what they need to do and not do. Not enough, say the opponents. These scooter rental firms are raking in the dough and trying to act like they are innocent and that it is solely up to the renter “driver” to take responsibility for their own actions.

What does this have to do with AI self-driving cars, you might be wondering?

At the Cybernetic AI Self-Driving Car Institute, we are asking the same kinds of questions about AI self-driving cars, and trying to find ways to deal with the similar problems being encountered.

Of course, it’s a bit more serious in the sense that if you have human drivers that do not abide by certain kinds of necessary practices with a self-driving car, the end result can be real life-or-death consequences. A mishandled car is a much more dangerous machine and potential killer than is a scooter.

The Case of Tesla’s Marketing and Advertising

Recently, a letter was submitted to the Federal Trade Commission by a consumer watchdog group asserting that Tesla is doing essentially what I’ve described about the scooter rentals. Tesla is accused of deceptive and misleading statements and actions, all of which are contributing to Tesla drivers potentially becoming endangered, and now there are some actual cases involving Tesla-related driving deaths that are claimed to support this notion.

It has been pointed out that the web site touting the Tesla Autopilot states rather prominently “Full Self-Driving Hardware on All Cars,” and that there is a video that begins with this emboldened message shown on the screen (shown in all caps on the video): “THE PERSON IN THE DRIVER’S SEAT IS ONLY THERE FOR LEGAL REASONS. HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF.”

What impression do you get from those aspects?

Do you believe that the self-driving car can drive the car without human intervention? For those of you who are AI developers and know about AI self-driving cars, you’d of course say that the Autopilot doesn’t truly drive the car without any human intervention. But what about the average consumer? What impression would they get?

It is contended that most people would think that the Tesla Autopilot is a true Level 5 self-driving car, which is the level at which an AI self-driving car is driven by the AI and there is no human assistance needed. Indeed, many auto makers and tech firms that are aiming toward Level 5 self-driving cars are making them so that there aren’t any driver controls in them at all, at least none for humans to use. The notion is that the AI is the driver of the car. There is no human driving.

Tesla’s Autopilot is currently considered somewhere around a Level 2 or Level 3. It absolutely requires that a human driver be present, and that the human driver be ready to take over control of the car. It is not in that sense a true self-driving car (i.e., it is not a Level 5). And yet, some would say that the manner in which Tesla is marketing and promoting the car would tend to mislead consumers into believing that it is a true self-driving car.
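To make the distinction between the levels concrete, here is a minimal sketch, assuming the standard SAE-style level definitions, of a lookup indicating whether a human driver must remain ready to take over; characterizing any particular product (such as Autopilot at roughly Level 2 or 3) is this column’s framing, not an official rating:

```python
# Minimal sketch of SAE-style automation levels and whether an attentive,
# ready-to-take-over human driver is required. Levels 0-3 need a human;
# Levels 4-5 do not (Level 4 only within its operational design domain).
HUMAN_DRIVER_REQUIRED = {
    0: True,   # no automation
    1: True,   # driver assistance
    2: True,   # partial automation: human supervises at all times
    3: True,   # conditional automation: human must be ready to take over
    4: False,  # high automation, within its operational design domain
    5: False,  # full automation: no human driver at all
}

def needs_attentive_human(level: int) -> bool:
    return HUMAN_DRIVER_REQUIRED[level]

print(needs_attentive_human(2))  # True  -- e.g., today's driver-assist systems
print(needs_attentive_human(5))  # False -- the true self-driving car
```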

Another common complaint about Tesla’s approach is that they have “cleverly” named their AI system to be called Autopilot. What impression do you have of the word “autopilot” in general?

Most people tend to think of an airplane autopilot, which they assume means that the airplane can be flown without the need for a human pilot to intervene. We’ve all seen TV shows and movies wherein the human pilot turns on the autopilot, and then reads the newspaper or maybe even falls asleep. That’s the general notion that people seem to have about the word autopilot. Does the naming of the Tesla Autopilot then also contribute to deceiving people into believing that the AI of the Tesla at this stage of capability is more capable than it really is?

By the way, you might find of interest my analysis of the autopilot notion and self-driving cars, published in July 2017.

Meanwhile, the deadly Tesla driving incidents of May 2016 and March 2018 have so far been assessed as occurring as a result of the human driver failing to do their part in terms of driving the car. In the May 2016 case, the NTSB stated that the human driver’s “pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations.”

Furthermore, each time that there has been a Tesla incident involving an activated Autopilot, the Tesla company has right away pointed the finger at the human driver.  The human driver was not paying attention, they say. The use of the Autopilot requires an active human driver, they say, and emphasize that the Tesla owner’s manual states this, along with an on-screen message. According to the NHTSA though, “it’s not enough to put it in an owners’ manual and hope that drivers will read it and follow it.”

Tesla claims that the human drivers do know that they are key to the driving task, based on surveys that Tesla has conducted. Thus, when an accident occurs such as the March 2018 incident, they pointed out that: “The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.” In other words, the implication is that the human driver should have known better, was in a sense warned to be in-the-know, and yet failed to act, and thus presumably the human driver, not the auto maker, is responsible for the incident.

Some accuse Tesla of wanting to have their cake and eat it too, in that on the one hand it wants to promote its Autopilot as the greatest thing since sliced bread, and yet at the same time it tries to retreat from any such implication when it comes time to own up after an accident occurs. Some suggest it is like the movie Casablanca, wherein the French Captain Louis Renault pretends to look the other way and tells the police to round up the usual suspects, when he really knows who done it.

Tesla points out that they intentionally use the steering wheel touch sensor to act as a continual reminder to the human driver to remain attentive to the driving task. Some experts have said they should have done more, such as using eye movement detection and face monitoring to ensure that the driver is actually watching the road ahead. It was recently revealed in an account in the Wall Street Journal that Tesla had considered adding such equipment but decided not to do so, which may come back to bite them.
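As an illustration of how such a reminder mechanism works in principle, here is a minimal sketch of an escalating hands-on-wheel warning; the thresholds and actions are purely hypothetical and are not Tesla’s actual values:

```python
# Hypothetical sketch of an escalating "hands on the wheel" reminder.
# The thresholds, warning types, and fallback action are illustrative only.
def warning_for(hands_off_seconds: float) -> str:
    """Map continuous hands-off time to an escalating warning level."""
    if hands_off_seconds < 15:
        return "none"
    elif hands_off_seconds < 30:
        return "visual warning"         # e.g., flashing message on the dash
    elif hands_off_seconds < 45:
        return "audible warning"        # e.g., chime added to the message
    else:
        return "slow down / disengage"  # hypothetical fallback if ignored

for t in (5, 20, 35, 60):
    print(t, "seconds hands-off ->", warning_for(t))
```

The critics’ point is that this kind of escalation only nudges the driver, whereas eye and face monitoring would verify attention more directly.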

Will the Federal Trade Commission opt to investigate the claim against Tesla of having deceptive and misleading advertising and marketing practices?

No one knows.

If they do open an investigation, it would certainly seem to cast a shadow over Tesla and its Autopilot. Tesla already seems to be facing a number of other issues and this might be another straw on the camel’s back. Or, it could be seen as something minor and more of the usual kind of bravado that many of the tech firms seem to exhibit. Also, there are some that feel any regulation that looks over the shoulder of self-driving car makers is going to stunt the growth and pace of advance for AI self-driving cars, and so there is resistance toward taking action against these makers.

Even if an official investigation is launched, it could be that Tesla might try to settle the matter rather than having to duke it out and have the public become concerned or confused as to what the fuss is all about. Some suggest that Tesla could readily rename and re-brand the Autopilot, selecting a name that carries less connotative baggage. They could also increase the manner in which they warn human drivers about the capabilities of the car. All of this could be done to appease the regulator and yet not put a particular dent in the allure of Tesla.

I’ve already warned that for all of the AI self-driving car makers and tech firms, we’re gradually and inexorably moving toward an era of product liability lawsuits. (See my piece, Looming Cloud Ahead.)

I’ve also warned about how the auto makers and tech firms are going to market AI self-driving cars: the marketing can provide an impetus for the public to want and accept AI self-driving cars, but it could also backfire if the expectations are not met. (See my column on the Marketing of Self-Driving Cars.)

One way to consider this problem involves an equation with some aspects to the left of the equal sign, and other aspects to the right of the equal sign. On the left, we’ll stack the suggested and overt claims made by an auto maker or tech firm about what the AI self-driving car can do. On the right, we’ll stack what the AI self-driving car can actually do.  Right now, some would say that Tesla has too much on the left, and so the equation is unbalanced in comparison to what is on the right side of the equation.

Presumably, any such auto maker or tech firm needs to reach a balanced equation, and to do so will either need to subtract something from the left to bring it down to the actual capabilities indicated on the right side of the equation, or, they will need to increase the capabilities on the right side to match the inflated expectations on the left side. By far, changing the left side is faster to achieve than changing the right side.

Responsibility and AI Self-Driving Cars

Let’s revisit the scooter wars. Who is responsible for the proper use of the motorized scooters that are being rented? You might say that it is the human “driver” that has rented the scooter, they are the responsible party. This seems to make sense. They rented it, they should use it properly. There are those though that say the scooter rental firm is responsible, they are the responsible party. This seems to make sense. They provided the scooter for rental, and so they should be ensuring that however it is used that it is used in a proper manner.

This can be likened to AI self-driving cars.

Who is responsible for an AI self-driving car and its actions?

You might say that the human driver is the responsible party. Or, you might say that the auto maker is the responsible party. We might also consider that it could be the AI system that’s responsible, or maybe the AI maker, or perhaps an insurance firm that is insuring the self-driving car, or maybe the human occupants (even if not driving the vehicle).

Take a look at Figure 1.

Is it only one of those parties that holds the sole and entire responsibility?

Maybe. You might contend it is solely and completely the human driver that is to be held responsible for the action of the AI self-driving car. Or, maybe you take the position that it is solely and completely the auto maker. The odds are that we’ll likely consider it to be some kind of shared responsibility. It might be a combination of the auto maker and the AI makers that are held responsible. Or, maybe a combination of the auto maker, the AI maker, and the human driver.

See Figure 2.

In the case of the AI self-driving car, we need to consider the level of the self-driving car. At Level 5, the true self-driving car, we can presumably say that it is not the responsibility of the human driver, since there will not be a human driver and not even a provision to have a human driver in the self-driving car (all humans will be mere occupants, not drivers).

See Figure 3.

When we’re at the levels less than 5, it’s pretty much the standard definition that the human driver is responsible for the actions of the self-driving car. But there’s definite wiggle room, since the AI could have done something untoward, and thus it raises the question of whether even a human driver who was attentive and ready to take action can still be held responsible.

I’ve analyzed this in my exploration of the human back-up drivers that are currently being used in self-driving cars being tested on our public roadways. Suppose you put a human driver into a self-driving car and tell them they are ultimately responsible for the actions of the car, but then, when an accident is about to happen, the AI hands over the car controls to the human with only a split-second left to go. Can the human be reasonably expected to really take any action? Is it “fair and reasonable” to still say that it was the human driver at fault?
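To see why a split-second handover is problematic, consider a minimal sketch that compares the lead time the AI provides against an assumed human takeover time; the takeover figure is a rough human-factors rule of thumb assumed for illustration, not a measured or authoritative value:

```python
# Sketch: is a handover "fair and reasonable" given human takeover time?
# ASSUMED_TAKEOVER_SECONDS is an assumption (a rough rule of thumb), not a
# measured or authoritative value.
ASSUMED_TAKEOVER_SECONDS = 2.5  # time to re-engage, assess, and act

def handover_is_reasonable(lead_time_seconds: float) -> bool:
    """True if the AI gives the human at least the assumed takeover time."""
    return lead_time_seconds >= ASSUMED_TAKEOVER_SECONDS

print(handover_is_reasonable(0.5))  # False -- a split-second handover
print(handover_is_reasonable(6.0))  # True  -- a more reasonable margin
```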

See my analysis of the human back-up drivers situations.

Also see my analysis of the Uber incident in Arizona.

And see my overall framework about AI self-driving cars.

Co-Shared Responsibility

When I was in college, I moved into a house with three other roommates, having come from my own apartment that had just me in it. I at first thought it would be a blast and we’d have nonstop fun, kind of our own version of Animal House. What I discovered pretty quickly was that the food I left in the refrigerator would mysteriously disappear (this never happened in my own apartment!). We had one bathroom for all of us, and invariably it was turned into a pig sty, and nobody took responsibility for how it got that way (just by osmosis, I guess).

Things got so bad that we had a toilet paper roll “battle” that consisted of no one wanting to go to the trouble of putting a new toilet roll into the bathroom (it would have meant having the foresight to buy a roll and have it ready, and actually take the old one off and put the new one on). As the toilet paper roll neared the end of its available tissues, we all began taking less and less, in hopes that someone else would get stuck with the roll being essentially empty and then be forced into changing the paper roll. We’d end up with one ply left. Then, one half of that (a household member used just half for the needed purpose). Then, half of that half (it’s amazing and surprising how much a quarter of a ply can do for you). And so on. It’s funny to think about it now, but we were actually serious about it at the time.

The point to the story is that when in a shared responsibility situation, you need to have some pretty clear-cut rules about how the responsibility is to be shared. If you don’t have ground rules, it can be chaos, since the joint responsibility is muddled.

Take a look at Figure 4.

For a conventional car, I think we can pretty much agree that the human driver is the responsible party. Of course, there are exceptions, such as if the wheels fall off the car because the auto maker made them defective, etc.

With a less-than-Level 5 self-driving car, we’re going to have a co-shared responsibility. Now, I’m not necessarily talking about legal responsibility for the moment and would like to just take a look at the more practical technical aspects of this co-sharing arrangement.

Here’s where the rub is. When you put the AI system and the human driver into a co-shared responsibility situation, they each need to “know” what the other is supposed to do. This also needs to be achieved in some practical manner. I say this because many of the AI developers and the auto makers and tech firms are not treating this in a truly practical way.

Let’s consider some of the key ways in which the responsibility can get muddled and focus on the human driver aspects thereof: (See Fig. 5.)

  •         Unclear on task sharing responsibilities

First of all, the human driver needs to be completely and utterly clear about the task sharing duties. Who does what, and when, and why, and how. This is not easy to ensure. Humans, as we’ve seen with the scooters, will tend to ignore warnings and cautions, and do what they think they want to do. Same goes for AI self-driving cars.

  •         Lack of situational awareness

The human driver tends to drift away from the situational awareness that normally occurs when they are fully responsible for the driving task. It is easy to begin to believe that the situation is being handled, and so you don’t need to stay on the edge of your seat. The problem is that when the human driver is needed, it often involves a split-second decision and action, all of which depend on having an immediate situational awareness.

  •         Incrementally growing complacency

Humans tend to get complacent. If the AI seems to be driving, and the more that it happens, the more complacent you become. You aren’t thinking about those instantaneous situations like a tire blowing out or a truck that is mistakenly parked in the middle of the road. Most of our daily driving is relatively routine. The AI generally handles routine driving. Therefore, humans become complacent.

  •         Dulled alertness

Human alertness gets dulled. You really can’t expect someone to be watching out continuously when they are under the belief that the AI is generally able to drive the car. When you are solely driving the car, you know that you need to be alert. When “someone else” (something else) is driving the car, you let your guard down.

  •         Mental distractions

When you are solely driving a car, you might have some mental distractions, like thinking about what you are going to wear to that party tonight. But you still have the driving task at the forefront. When the AI is doing the driving, you tend to allow mental distractions to take over more of your mental processing. It’s a natural tendency.

  •         Physical mispositioning for task

Most of the time, if you are solely driving a car, you have your foot on the gas or the brake and your hands on the wheel, all of this continuously. When you are sharing the driving with the AI, you are likely to take your feet off the pedals and your hands off the wheel. Even if you are prompted periodically to put your hands on the wheel, this is not the same as continuously having your hands on the wheel.

  •         Confused about Theory of Mind re: AI

This is one aspect that not many are considering, namely, the human driver needs to have a theory of mind about the AI driving capability. This is akin to if I had a novice teenage driver at the wheel of my car. I would generally know what the teenage driver can handle and not handle. Right now, few of the human drivers have a sufficient understanding of the theory of mind about what the AI can and cannot do.

  •         False assumptions about safety

The human driver tends to believe that if an accident hasn’t yet happened while the AI is driving, it probably implies there won’t be an accident. The longer that the AI drives and seems to be accident free, the more this false assumption about safety increases.

  •         Lulled into overreliance

The human driver begins to become over-reliant on the AI driving the car. You’ve likely seen the videos of those idiots that put their head out the window of the car and speak to the video camera about how great it is that the AI is driving the car. All it would take is for that car to get into a predicament and that head-out-the-window driver would likely get injured or killed (or, lose their head).

  •         Insufficient reaction time

This is one of the most insidious aspects about a co-shared responsibility. Imagine a teenage driver that was about to ram into a brick wall and suddenly looked at you, the adult sitting in the passenger seat, and told you to take the wheel. Too little, too late. This is what is going to keep happening with today’s AI self-driving cars.

  •         Disjointed from the driving task

Inevitably, the human driver is going to feel disjointed from the driving task. Even if you put in place all sorts of facial monitoring and eye monitoring, there is an instinctive sense that you are no longer solely responsible, and so you feel less committed to the driving task. This is human nature.

  •         Unanticipated instantaneous engagement

The idea that the human driver will need to become instantaneously engaged in the driving task, when meanwhile they’ve been only partially engaged, and that the need will most likely arise at an unanticipated moment, is another of those seemingly impractical notions.

  •         Other

You might already be aware that Google’s Waymo has been keen to develop an AI self-driving car and has tended towards achieving a Level 5. Why? One reason likely is that it removes the co-sharing of the driving task. This eliminates all of the above human frailties and issues of design and development related to getting the human and the automation to work together in a coordinated and blissful manner.

There’s also the aspect that having AI that can fully drive a car without human intervention is a much tougher technical challenge, since you presumably need to make the AI so good that it can drive as humans can. When you have a self-driving car with a co-shared responsibility, you can always lean on the human to make up the slack for whatever you could not get the AI to be able to do. Some see this as a half-baked technical solution.

For those auto makers and tech firms in the middle ground, above the low-tech conventional car and not yet at the true Level 5, it’s going to be ugly times for all.

Those that naively think that they can just push technology out the door and expect humans to play along, well, perhaps the scooter wars are a good lesson. You can make seemingly wonderful technology, but those darned humans are not going to abide by what you want them to do. Is this then an excuse for you? I don’t think so. I assert that making a better mousetrap means more than just the technology itself, and that you have to take into account the nature of humans, else in the end it’s going to be a deathtrap. AI self-driving cars are a serious innovation, and I hope they aren’t undermined, perhaps inadvertently, by the rush to get them onto our streets producing untoward results that kill the golden goose for all of us, society included.

Copyright 2018 Dr. Lance Eliot.

This content is originally posted on AI Trends.