Product Liability for Self-Driving Cars: Looming Cloud Ahead


By Dr. Lance B. Eliot, the AI Insider

There is a lot of glee these days in the halls of self-driving car developers. We are on fire. Everyone is breathlessly awaiting the next iteration of self-driving cars. Demand for self-driving car developers is extremely keen. Training classes on software development for self-driving car systems engineering are packed to the gills. Car makers are excited to tout their concept cars and to make claims about when their next self-driving car version will hit the streets. Money from the venture capital community is flowing to self-driving car startups. The media loves to run stories about the self-driving cars on the horizon.

What a wonderful time to be alive!

The lawyers are also gleeful. With all this money flowing into self-driving cars, we’ve seen fights on the Intellectual Property (IP) frontlines (see my column on the topic of IP lawsuits for self-driving cars). The IP disputes are the most obvious target right now for legal wrangling. We’ve also seen a recent lawsuit against Tesla alleging that the promised capabilities of Autopilot 2.0 were slipshod and not what was represented (see my column on this topic). The Tesla lawsuit is going to be a big eye-opener for the entire self-driving car marketplace. It will probably take several years for that lawsuit to play out, though, and unless Tesla somehow takes a big hit by losing it, the lawsuit will remain nothing more than a small blip on the radar for self-driving car makers.

We are now, though, entering a much bigger frontier for lawsuits in the self-driving car arena. Let me give you an example of what’s coming up soon. Audi is bringing out the new model of its Audi A8, which is claimed to have Level 3 self-driving capabilities (see my article on the Richter scale of self-driving car levels). Up until now, we have essentially had Level 2 self-driving capabilities, though some have argued that Tesla’s Autopilot is somewhere between Level 2 and Level 3, and some even say it is Level 3 (this is quite debatable).

According to Audi’s own press release: “Audi will introduce what’s expected to be the world’s first to-market Level 3 automated driving system with ‘Traffic Jam Pilot’ in the next generation Audi A8. The system will give drivers the option to travel hands-free up to 35 mph, when certain conditions are met prior to enabling this feature — for instance, the vehicle will ensure it is on a limited-access, divided highway.”

The debut of the new Audi A8 with so-called Level 3 capability is slated for July 11. Furthermore, the car will be featured in the new Spider-Man movie releasing in early July, showcasing the driver taking their hands off the wheel and allowing the Audi A8 to drive itself. This blockbuster movie could help push the Audi A8 ahead in the race to bring self-driving cars to market. Moviegoers will presumably rush out to buy the latest Audi A8. And their expectations of other car makers will continue to rise, wanting to see similar features on their Ford, Nissan, Toyota, and other cars, else they won’t buy any new cars from anyone other than Audi.

Lawyers will likely go to see the new Spider-Man movie in droves, wanting to witness the first piece of evidence that will ultimately be held against Audi. What do I mean? I am saying that with great guts come great risks. Audi is setting itself up for what could be the blockbuster bonanza of self-driving lawsuits. It is certainly a big target since it is a big company (owned by the Volkswagen Group, consisting of Audi, Porsche, Volkswagen, Bentley, Bugatti, Lamborghini, SEAT, and Skoda). Its deep pockets make it a tremendous target for any attorney that wants to become rich.

I am talking about a hefty product liability lawsuit. Audi is making the brash claim that its Audi A8 is going to be Level 3. Will consumers understand what Level 3 actually entails? Does Audi even understand what Level 3 entails? Can Audi squirm out of any claims by arguing that consumers who bought the car really comprehended what Level 3 involves and/or what Audi meant by saying Level 3? Let’s take a close look at this and explore what product liability is all about.

To make a bona fide defective product liability claim, you need to meet one or more of the following criteria (note: I am referring to U.S. law, and so other countries will differ; also, I am not a lawyer and I am not providing legal advice, so this is my disclaimer and you should go get an attorney if you believe you have a product liability claim):

  a) Defective manufacturing of the product
  b) Defective design of the product
  c) Failure to adequately forewarn about proper use

Let’s discuss each of these three aspects.

In the case of defective manufacturing of a product, the product was fouled up during the manufacturing process. The product might otherwise have been designed correctly for what it is intended to do, and it might have properly forewarned about how to use it, but during manufacturing there was a screw-up, and so the product you received had a defect. Thus, it was imperfectly made. If the imperfectly made product then got you into trouble, such as a tire that left the factory with an internal rip and later exploded while you were driving on it, you’d have a strong case to win a lawsuit against the tire maker.

In the case of a defective design of a product, the product was designed improperly; regardless of whether it was manufactured correctly, and regardless of whether you were forewarned about the proper use, its design was wrong to start with. In other words, the product is inherently dangerous, even if it were made perfectly. Suppose you buy a pair of sunglasses and they fail to protect your eyes from UV rays; you might well have a case that by design the sunglasses were dangerous. This aspect can only be pushed so far, since if the item is already dangerous in some manner, like say a meat cleaver, you will have a hard time claiming that when it cut off your finger the meat cleaver was by design unreasonably dangerous. The counter-claim is that by-gosh it’s a meat cleaver, and so it is apparent that the product would be dangerous.

In the case of failure to forewarn, this involves products that are in some way dangerous but whose users were not adequately made aware of the danger. The danger has to not be obvious; again, a meat cleaver pretty apparently has inherent dangers, so there is probably no need to forewarn that you could cut off your fingers, though a product maker is wise to make such warnings anyway. If a product should only be used in certain ways, and the user should exercise care or special precautions, and those precautions aren’t called out by the maker, then the product user could go after the maker by saying that they (the user) were not aware of the dangers involved.

Now that we’ve covered the three major aspects of product liability, let’s see what it takes to have a bona fide claim that some product is defective. First, you have to actually in some way become injured, and the product must be a contributing factor to that injury. Simply theorizing that you could get hurt is rarely sufficient. You must have actual damages of some kind. And the damages must be tied to the product. If the damages done to you weren’t related to the product, the claim will be tossed out since you weren’t harmed by the product, even if it turns out that the product is truly defective in some manner.

You also must show that you were using the product as intended. A meat cleaver is intended to carve meat. If you decide to start your own juggling act and toss a meat cleaver in the air, and in so doing it chops off your hand, this really wasn’t the intended use of the meat cleaver. We wouldn’t likely expect the meat cleaver maker to have forewarned that you should not use the product for juggling. There are often way too many bad uses of a product to enumerate them all. The focus usually comes back to the reasonable intended use of the product, as per what a reasonable person would tend to believe it should be used for. Of course, whatever the maker of the product says can have a big impact on that aspect. If the maker of Joe’s meat cleavers advertises that their meat cleaver is for cutting meats and for juggling, they have opened the door to a usage that other meat cleaver makers would not likely be as liable for.

Are you comfortable now that you are up to speed on the rudiments of product liability? I hope so, because now we’ll take a close look at self-driving cars. We’ll use Audi as a case study of what might happen to it, and this is applicable to all other self-driving car makers too.

Audi is claiming that the Audi A8 will have a Level 3 capability. In this instance, Audi is saying that when the car is in heavy traffic on an open highway, with a barrier dividing it from opposing traffic, the Level 3 feature can be engaged to drive the car. While it is engaged, the human driver can presumably take their focus off the driving task. It has been reported that the Audi might even let the user read their emails or watch a video on the dashboard display. At the same time, Audi indicates in its press release that “Driver Availability Detection confirms that the driver is active and available to intervene. If not, it will bring the car to a safe stop. The vehicle ensures that the road is suitable for piloted driving by detecting features of the surroundings like lane markings, shoulders, and side barriers.”
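Audi’s description of Driver Availability Detection amounts to an escalation policy: keep piloted driving while the driver appears available, warn when they don’t, and bring the car to a safe stop if they remain unresponsive. Here is a minimal sketch of such a policy; the function names and timing thresholds are my own illustrative assumptions, not Audi’s actual implementation.

```python
from enum import Enum, auto


class Action(Enum):
    CONTINUE = auto()   # driver deemed available; keep piloted driving
    WARN = auto()       # prompt the driver to show they are available
    SAFE_STOP = auto()  # driver unresponsive; bring the car to a safe stop


def availability_action(seconds_since_driver_input: float,
                        warn_after: float = 10.0,
                        stop_after: float = 20.0) -> Action:
    """Escalate based on how long the driver has been unresponsive.

    The 10 s and 20 s thresholds are made-up values for illustration.
    """
    if seconds_since_driver_input < warn_after:
        return Action.CONTINUE
    if seconds_since_driver_input < stop_after:
        return Action.WARN
    return Action.SAFE_STOP
```

Notice that in a product liability fight, the chosen thresholds themselves would be scrutinized: set them too loosely and the detection is arguably too weak to count as confirming the driver is “active and available to intervene.”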

Let’s now revisit our three criteria, namely whether a product is defective in its manufacture, whether it is defective in its design, or whether it is defective in forewarning the product user.

One problem for Audi will be whether the car as manufactured actually has no problems in the sensors that contribute to detecting the car’s surroundings. Any manufacturing defects there will come back to haunt Audi if someone using the Level 3 feature is injured in a car accident. Likewise, if the AI software that carries out the Level 3 driving has any inherent manufacturing issues (suppose there are bugs in the software), Audi is looking at a potential product liability claim if someone is injured as a result of that bad manufacturing. And so on for any of the hardware and software components that comprise the Level 3 feature.

Or, it could be that the design itself is flawed. The mechanism intended to ascertain that the driver is still attentive might be so insufficient and weak that it is unable to adequately detect that the user is not paying attention to the road. Or, as I have stated many times in my columns, suppose that the system does not allow sufficient time to alert the human driver to intervene. If the human driver gets just a split second to react to something that has gone beyond the Level 3 capability, and the belated notification means the car gets into an accident and the driver or passengers are injured, you could claim that the design of the Level 3 feature was defective.
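To see why alert timing matters for a design-defect argument, consider a back-of-the-envelope takeover budget: the alert must arrive earlier than the driver’s reaction time plus the time needed for the corrective maneuver. A sketch, where every number is an illustrative assumption rather than measured data:

```python
def takeover_budget_ok(alert_lead_time_s: float,
                       driver_reaction_s: float = 3.0,
                       maneuver_s: float = 1.5) -> bool:
    """True if the alert arrives early enough to cover an assumed
    driver reaction time plus the corrective maneuver time."""
    return alert_lead_time_s >= driver_reaction_s + maneuver_s


def alert_distance_m(speed_mph: float, alert_lead_time_s: float) -> float:
    """Distance the car travels between the alert and the deadline."""
    return speed_mph * 0.44704 * alert_lead_time_s  # mph -> m/s
```

At the Traffic Jam Pilot’s 35 mph ceiling, a half-second alert gives the driver under eight meters of travel in which to notice, decide, and act, which illustrates why a belated notification could be framed as a design defect.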

In terms of the failure to forewarn, I am guessing that the manuals that come with the Audi A8 will probably try to warn the driver to be careful and mindful when using the Level 3 feature. But if that warning amounts to some kind of watered-down cautionary explanation, the lack of emphasis could undermine Audi. Furthermore, if the manual is not readily available to the driver, such as being stored in a glovebox with no expectation of it being read, this also opens up Audi. You can also bet that the dealerships selling these cars are going to hype the Level 3 feature. As such, I would guess that the buyer of the car will think it does more than what Audi as a manufacturer believes, because the dealers will be making bolder claims. Likewise, hype in the advertising for the car will further bolster a claim that the car maker has not sufficiently forewarned about the limits of the Level 3 feature.

I already disagree with the claims that some have been making about the Audi, namely that the Level 3 definition merely means the car handles all aspects of driving while the human driver is expected to respond to a request to intervene.

They are probably referring to this part of the SAE Level 3 definition: “DDT fallback-ready user (while the ADS is engaged) is receptive to a request to intervene and responds by performing DDT fallback in a timely manner.” The ADS refers to the Automated Driving System, and the DDT refers to the Dynamic Driving Task. The definition is essentially saying that there must indeed be a human driver who is ready to intervene and is expected to intervene when alerted by the self-driving car.

But there is more to this Level 3 definition, and I think some of the fake news out there is misreporting it (see my article about fake news and self-driving cars). The SAE Level 3 definition also says this: “DDT fallback-ready user (while the ADS is engaged) is receptive to DDT performance-relevant system failures in vehicle systems and, upon occurrence, performs DDT fallback in a timely manner.” This portion suggests that the human driver is supposed to be in tune with what the self-driving car is doing, and if the human driver suspects that something is amiss, they are supposed to take over the driving controls.

Allow me to make this even clearer, by citing another section of the SAE about Level 3: “In level 3 driving automation, a DDT fallback-ready user is considered to be receptive to a request to intervene and/or to an evident vehicle system failure, whether or not the ADS issues a request to intervene as a result of such a vehicle system failure.”

Why am I so insistent on the wording of the Level 3 definition? Here’s why. Suppose the human driver of the Audi A8 is led to believe that they can take their eyes off the road and look at their email, and that they only need to be ready to respond to the self-driving car telling them to take over the controls. Even if we make the rather wild assumption that the self-driving car would tell them to take over in time to make a decisive and correct action for a pending accident, the issue here is that according to the true definition of Level 3, the human driver is supposed to somehow know when to take over the car regardless of whether the AI of the car forewarned them or not.

I am betting that by and large human drivers are going to assume that they only need to be ready to take over when notified explicitly by the self-driving car. They will be lulled into believing that the self-driving car knows what it is doing and that it will prompt them when it is time to take over. Imagine, too, the confusion of a human driver who notices the Audi veering toward danger while the Audi’s AI system doesn’t realize it and so is not alerting them. The human driver might be bewildered as to whether he or she should take over control of the car, momentarily frozen because the self-driving car itself has not said that they need to take over. Am I right to take over the controls when the self-driving car has not told me to do so? You can see this flashing through the mind of the human driver.

Though you might think I am splitting hairs on this, you’ve got to keep in mind that eventually someone driving this alleged Level 3 self-driving car is going to get into an accident that injures or kills someone. An enterprising lawyer will possibly try to find a means of linking the Audi’s Level 3 capability to the particulars of the accident. This might be something conjured up by a creative lawyer trying to make the case for product liability, or it might be a true aspect that Audi failed to cover as part of the standards of product liability.

As part of a lawsuit, I can envision that there will be expert witnesses involved (I’ve been an expert witness in cases involving IP in the computer industry), in which these experts will be asked to testify about what the Level 3 capabilities do, how they were developed, the extent of how they work, what kind of testing was done to ensure the features work as claimed, etc.

There will also be a look at how the human driver has been forewarned about the Level 3 capabilities. What does the Audi user manual say? How easy or hard is it for the human driver to have read the user manual? What was the human driver told by those that sold them the car or rented the car to them? Were they given adequate instructions? Did they comprehend the limitations and what their role is in the driving of the car?

Get yourself ready for the upcoming avalanche of product liability lawsuits regarding self-driving cars. It hasn’t happened yet because we are still early in the life cycle of rolling out cars that are used by the public and that have Level 3 and above capabilities. We are still in the old days of cars, in which we expect the human driver to be entirely responsible for the driving of the car. With Level 3 and above, we are blurring the distinction and entering a new, murky era. Where is the dividing line between the human driver’s responsibility and the self-driving car’s responsibility? What is the duty of the car maker in identifying that line and making sure that the human driver knows what that line is?

There are those that firmly believe that self-driving cars can do no wrong, or that we must as a society accept that some of the self-driving cars that hit the roads will lead to unfortunate deaths or injuries, but that this is the price we need to pay to ultimately have true Level 5 self-driving cars (under the misguided belief that Level 5 self-driving cars will ultimately eliminate all deaths and injuries, which I’ve debunked in my columns).

Reality is going to hit those proponents in the face when humans get injured or killed, and when the lawyers say: hey, you self-driving car developers and makers, you can’t just toss these things into the hands of humans; you need to own up to being responsible for them. I do dread that day because it will likely dampen the pace of self-driving car advances. On the other hand, I keep exhorting our industry to put the safety aspects of what we are creating in these self-driving cars at the head of all this. Safety first. Drive safely out there.

This content is original to AI Trends.