First Salvo in Class Action Lawsuits for Defective Self-Driving Cars


By Dr. Lance B. Eliot, the AI Insider for AI Trends and a regular contributor.

Cars, can’t live without them, can’t live with them (if there are onerous defects).

As an expert witness in court cases involving computer systems, and formerly an Arbitrator for the American Arbitration Association on their Computer Disputes panel, I want to take you into the world of computer-related lawsuits as they emerge in the AI and self-driving realm. Get ready for quite a ride. We’ll start by considering major class action lawsuits in the automotive realm, beginning with some whoppers that were not about computers but instead involved various kinds of automobile equipment and car-design defects. This lays a handy foundation for the newly emerging lawsuits that involve AI in cars.

Do you remember the famous case of the Ford Motor Company scandal over the Pinto cars that tended to catch fire when struck at the rear of the car, where the gas tank was mounted? That was in the 1970s and eventually involved a class action lawsuit, during which it was revealed that Ford knew about the problem but opted to do nothing, since it was calculated to be cheaper to pay out claims than to fix the problem. Ford itself was eventually criminally indicted for reckless homicide. It was a huge story for several years. Today, mentioning “Pinto” serves as a kind of shorthand suggesting a potentially severe defect, and it can be applied to any kind of product. “Watch out for that washing machine, it’s a Pinto,” some reporters exclaimed last year when a brand of washing machine went off-kilter and its internal spinning parts flew apart during normal use.

If you weren’t around in the 1970s and haven’t ever heard about the Ford Pinto, I offer the case of the Ford Explorer SUV that was prone to rollovers around the year 2000. At first, critics said that it was the overall design of the SUV that made it defective. Presumably, the vehicle was somewhat top-heavy by design, and so upon particular driving maneuvers it would easily topple over. Imagine trying to balance an object on its edge when there is too much weight toward one side or the other. The National Highway Traffic Safety Administration investigated and pointed at the tires. Firestone made the tires, and all of a sudden the bright light of accusatory defects was on them. It was a mess, and class action lawsuits were involved.

One more famous case that’s even more recent involves the Toyota Lexus scandal in 2009. Some people died when a Lexus would occasionally, seemingly, go out of control. Initially, Toyota claimed that the root cause was the floor mats. Their theory was that a floor mat would inch forward toward the pedals and jam the brake and accelerator pedals, preventing them from working properly. Doubts were cast on this theory. Ultimately, during the class action lawsuit, Toyota admitted that the accelerator pedal itself might also be defective. At times, this “sticky pedal” would remain stuck at a given depressed position and could not readily be budged by the human driver. In 2014, Toyota paid a $1.2 billion fine and admitted that it had misled consumers, concealed the problem, and made deceptive statements about what it knew and what the problem was.

Why this history lesson about cars, defects, and class action lawsuits? Because we are now entering the age of self-driving car defects, along with the inevitable class action lawsuits that accompany such matters. Indeed, the first such salvo was recently launched when a class action lawsuit was filed against Tesla.

Let the battle begin. There are some very hungry class action lawyers that would love to get some dough out of the bonanza of self-driving cars by going after the self-driving car makers. The bigger the car maker, the juicier the target. I am guessing that class action attorneys have a dartboard set up in their offices and that the name of each self-driving car maker is shown at various positions on the board. At the center of the board are the biggest automakers. It’s like the old joke about why the bank robber robs banks: because that’s where the money is. Going after startups that are making self-driving cars is neither very smart nor very lucrative. The anticipation by these cagey lawyers of the big automakers rolling out self-driving cars is like a tiger ready to pounce on its prey. Tesla right now is the best such target because it has the most self-driving-related vehicles in the hands of consumers, and it is rich enough as a company to make going after the big bucks worthwhile.

I would like to add that this is much more than just ambulance chasing. We definitely have self-driving car makers that are not taking safety seriously (as I have emphasized in many of my pieces on AI and self-driving cars). I have repeatedly exhorted the self-driving car makers to put due attention toward safety. Most are still not listening. Most are blindly pushing ahead with the “fundamentals” of getting the AI to simply drive a car, and aren’t as worried about safety issues. A lot of the software developers are also the type that treats safety as an afterthought. For them, the light bulb will come on only after a self-driving car demonstrably exhibits safety issues and actually harms or kills someone.

Let’s take a close look at the class action lawsuit filed against Tesla regarding its self-driving car capabilities. Keep in mind that when I say self-driving car capabilities, there are five defined levels of self-driving cars, and we are still not anywhere near the topmost, most sophisticated Level 5. Right now, self-driving cars are around Level 3. I mention this because if we are already seeing class action lawsuits now, I can readily predict that once we actually get to Level 5 we will have a torrent of such lawsuits. The more AI on-board, the greater the chances of defects and of defective actions by the self-driving car.
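For readers who like things concrete, here is a tiny Python sketch of that five-level taxonomy. This is purely illustrative on my part; the enum names and one-line descriptions are my own paraphrases of the commonly cited SAE-style breakdown, not an official standard or anything drawn from the lawsuit.

```python
# Purely illustrative sketch of the five-level self-driving taxonomy
# discussed above; the names and descriptions are my own paraphrases.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    DRIVER_ASSISTANCE = 1       # individual assist features, e.g., adaptive cruise
    PARTIAL_AUTOMATION = 2      # combined steering and speed control, driver supervises
    CONDITIONAL_AUTOMATION = 3  # car drives itself, driver must take over on request
    HIGH_AUTOMATION = 4         # no takeover expected, within a limited driving domain
    FULL_AUTOMATION = 5         # drives anywhere a human could, no driver needed

# Per the discussion above, today's systems hover around Level 3:
current = AutonomyLevel.CONDITIONAL_AUTOMATION
print(current.name, int(current))  # CONDITIONAL_AUTOMATION 3
```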

In this first salvo, the lawsuit claims that Tesla provided a nonfunctional Enhanced Autopilot AP2.0 capability. For those of you who aren’t devotees of Tesla, you might not be aware that around October 2016 Tesla made an effort to provide new features for its Autopilot, referred to as AP2.0. It cost around $10,000 and was said to include 8 surround cameras, 12 ultrasonic sensors, and software that would be greatly improved over AP1. AP2 was supposed to provide or enhance active cruise control, lane holding, collision warning, automatic emergency braking, and other nifty features. These are often referred to as Enhanced Autopilot (EAP) and Full Self-Driving (FSD).

These were marketed by Tesla, as easily proven by looking at their ads plastered on billboards and web sites. What the class action lawsuit claims is that:

  1. many of these features were delivered later than promised,
  2. many of these features have never been provided,
  3. many of the provided features do not do what was promised,
  4. many of the provided features are defective.

The first two claims, namely that Tesla was late in providing a feature or has not yet provided a promised feature, are essentially claims about misleading consumers. This involves showing that Tesla promised something and should be dinged because the consumer didn’t get it when promised, or has never received it. The marketplace has often let innovators get away with this kind of thing; we’ve seen firms like Apple make promises for new technology and then not quite deliver on time. This is bad, certainly, but not as bad in a sense as the other two claims. Not getting something that you paid for is bad, yes, but as you’ll see in a moment, getting something that you paid for and having it not work right, or worse, having it work wrongly, is the real hot water.

Allow me to emphasize that I am not letting Tesla or any self-driving car maker off the hook if they promise to deliver features and do not do so. It’s a typical dirty trick to try to convince consumers to wait and buy your product by creating doubt about a competing product that lacks those features. I think such promises need to be kept, and anyone making hollow promises needs to pay the penalty for false promises.

I also don’t subscribe to the “they are innovators, so we can’t complain when they are unable to deliver on new innovations” mindset. Many devoted Tesla buyers say that they aren’t upset when Tesla makes a promise and delivers late, since they are so devoted that they are willing to overlook such a gaffe. Genius takes time, they will offer defensively. I say hogwash. Promise, and keep to your promise. If you can’t accurately predict when you are going to deliver, then you have no business making a promise, and otherwise you must be accountable for what you pledged.

In terms of the other two claims, namely that the provided features don’t work as promised and in some instances are defective, these are quite serious, since they can make a life-or-death difference for those using the Tesla cars that have AP2.0.

Here are some of the accusations:

  • Essentially unusable
  • Demonstrably dangerous
  • Erratic
  • Buyers became beta-testers
  • Half-baked software
  • Behaves as if a drunk driver is at the wheel
  • Not effectively designed
  • Not safe
  • Not “stress-free”

These are obviously quite serious accusations. The assertions that some of the features are erratic and demonstrably dangerous are clearly of great concern. Engaging the Autopilot would presumably be done under the assumption that the capability is well-tested and works properly and safely. The lawsuit points out that Tesla had even promised that using the Autopilot AP2.0 would make driving “stress-free,” which is a bit of marketing hyperbole; we’ll have to see whether the court considers it a true promise or something weaker and less binding. I might tell you that the new tires on your car will make your driving stress-free, and the question arises as to whether that’s an outright promise or not, and what it even means to be stress-free (can we ever be truly stress-free, a philosopher might ask?).

The suggestion that AP2.0 drives like a drunk driver is especially interesting. In my prior columns, I have tried to debunk the notion that adopting self-driving cars will reduce car-related deaths due to drunk driving to zero. I have pointed out that though we might reduce the number of human drunk drivers, we are still faced with AI that might have flaws, errors, bugs, or omissions that cause it to sometimes be as lousy a driver as a drunk driver. I guess these lawyers must be reading my pieces!

Like any class action lawsuit, the legal action is intended to cover those consumers potentially impacted by the claims. In this case, the lawsuit seeks to encompass anyone who, within the designated time frame, either bought or leased certain models of the Tesla car brand. The suit seeks the economic losses suffered by those impacted Tesla owners due to the alleged false promises. What makes this claim a bit less biting is that there aren’t actually Tesla owners that were directly injured or died due to these alleged false promises. I mention this because it is much easier to get a win if you have some visceral and dramatic actual injury that has occurred, rather than just trying to show that a promise was broken and that somehow the car owners were less safe or more stressed.

Imagine if Pinto cars had not at first exploded or caught on fire, and someone had figured out beforehand that there was a potential for danger and death due to the design. Launching a class action lawsuit to say that a Pinto has the potential to be a killer is one thing; doing so after there have been actual cases is another. Pictures of burning Pintos and grieving families and relatives make for a more compelling case in court.

Whenever you file these kinds of class action lawsuits, you have to have someone specific who is considered to have been impacted by the claim, and they must be explicitly named in the lawsuit. You can’t just file in general and say that someone somewhere might be impacted. The named claimants serve as specific examples, and then you make the case that anyone else can logically be considered equally impacted. In this case, there are three named plaintiffs, each of whom allegedly bought an in-scope Tesla, and the filing says they paid anywhere from $81,000 to $113,000 for the cars.

Usually, the named plaintiffs aren’t going to get incredibly enriched by these cases, though we assume otherwise because of wild cases like having hot coffee spilled on you at your local fast food chain. Mainly what happens is the plaintiffs get something, the members of the class get something, and usually the lawyers get a hefty payout. I know it is easy to be cynical and say that these lawsuits are only about the lawyers getting rich. The other side of the coin is that the lawyers usually take these cases without any upfront promise of getting paid, and so they are at risk of making nothing on a case that could require tons and tons of work on their part. Also, one could say that they are “crusaders” in that they force companies to relook at what they do, and can therefore have a positive impact on getting car makers to be more serious about safety. That being said, I was not pressured into saying this by any of the attorneys that I know, and I am merely trying to point out that there are ambulance chasers and there are also do-gooders. You decide what the circumstance is in this instance.

What is especially alarming in this Tesla case involves some anecdotes that were included in the filing. Such anecdotes need to be taken with a grain of salt, since they were apparently plucked from the Internet (see my piece on AI self-driving fake news). Anyway, one anecdote said that a Tesla with AP2.0 was supposedly going 50 mph when the radar spotted a bridge ahead, which then caused the Autopilot to slam on the brakes. This goes back to my other columns about how we need to be mindful that the sensors of self-driving cars can produce false readings, and that the AI software might take even true sensory data and make mistakes interpreting it, or take inappropriate actions. This goes to the issue of making the AI software more robust and having safety controls throughout it.
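To illustrate the kind of safety control I have in mind, here is a minimal Python sketch of gating an automatic emergency braking decision so that a single spurious radar return (say, an echo off an overhead bridge) cannot by itself slam on the brakes. This is entirely my own illustrative assumption; the class name, thresholds, and sensor inputs are hypothetical and are not drawn from the lawsuit or from Tesla’s actual software.

```python
# Hypothetical, simplified sketch: require persistent, cross-sensor-confirmed
# evidence before committing to emergency braking, so one spurious radar
# frame (e.g., an overpass echo) cannot trigger the brakes on its own.
from collections import deque

class BrakingGate:
    def __init__(self, window: int = 5, required_hits: int = 4):
        self.required_hits = required_hits       # confirmations needed in the window
        self.history = deque(maxlen=window)      # rolling record of recent frames

    def update(self, radar_sees_obstacle: bool, camera_confirms: bool) -> bool:
        """Return True only when braking is backed by repeated radar hits
        that the camera also confirms, not by a one-frame radar blip."""
        self.history.append(radar_sees_obstacle and camera_confirms)
        return sum(self.history) >= self.required_hits

gate = BrakingGate()
# A lone radar hit with no camera confirmation does not brake:
print(gate.update(radar_sees_obstacle=True, camera_confirms=False))  # False

# Persistent, camera-confirmed hits across several frames do:
for _ in range(4):
    decision = gate.update(radar_sees_obstacle=True, camera_confirms=True)
print(decision)  # True
```

The point of the sketch is simply that persistence checks and cross-sensor agreement are cheap, well-known safeguards; a real emergency-braking stack is vastly more involved, but the absence of even this kind of gating is what an erratic, brake-slamming anecdote would suggest.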

The attorneys for the lawsuit are asserting that Tesla marketed proverbial smoke-and-mirrors to consumers. Fortunately, so far, no one seems to have been physically injured or killed due to the claimed smoke-and-mirrors, though in a sense this is “unfortunate” for purposes of this legal case. It is unfortunate for the legal case because it will be harder to show actual damages. It is also unfortunate because Tesla may not take the case as seriously as it otherwise would. And it is also unfortunate because the other self-driving car makers might likewise treat it as a less serious matter. Indeed, some of the automakers might simply tone down their marketing claims. That would be helpful, but if they are still in the backroom churning out flawed AI software for their self-driving cars, then they are only addressing the appearance of the problem rather than its substance.

As I have previously noted about Tesla, this darling-of-the-industry automaker dodged a bullet with last year’s case involving a human driver who, with the Autopilot on, ran into a truck and was killed; the federal investigation put the blame on the human driver (since, in theory, Tesla continues to say that it is the human driver who must be aware at all times and holds the final responsibility for the driving of the car). We’ll need to see whether this newly filed lawsuit can actually qualify as a class action (it has to show proof of its class nature), and whether the claims can be proven in court. Tesla might settle in some fashion beforehand, especially if it wants to avoid airing any potential dirty laundry, and if it wants to avoid a public relations backlash during the lawsuit.

For me, I hope this case will be a wake-up call for the AI community and those AI developers participating in the grand experiment of creating self-driving cars. It is exciting to think that in our generation we will have self-driving cars, somehow achieving a Jetsons-like accomplishment akin to everyone having jet packs. AI developers need to take this desire to be first with self-driving capabilities as also a responsibility to make those capabilities safe. I realize that those of you software engineers toiling away at a self-driving car maker might say that upper management doesn’t care about safety and that you are under pressure to churn out the other code. I’m not sure that’s going to be a valid excuse down the road, once a self-driving car you helped code injures or kills someone. You might be faced with civil legal action, criminal legal action, and your own sense of humanity and whether you did what you could and did the right thing. AI and self-driving cars are fun, but they also involve life-and-death.

This content is original to AI Trends.