Event Data Recorders for Self-Driving Cars: We Need a Black Box Relook


By Dr. Lance B. Eliot, the AI Trends Insider

By now, we are all generally aware that airplanes often have a so-called “black box” that is intended to survive a crash and be used to retrieve essential information about what happened during the flight. There is sometimes more than one such device, each focused on a particular aspect of the flight. For example, one black box captures flight control data such as speed and altitude, while another captures sounds inside the cockpit to record what the crew was saying.

In most cases, black boxes record information for a set number of minutes and then write over that recording as new information comes in. Let’s say a black box had the capacity to record 15 minutes’ worth of information. It would start recording at the beginning of the flight, and once the first fifteen minutes had passed, it would overwrite the first recorded minute with the 16th minute, then the second minute with the 17th, and so on. This provides a window of just 15 minutes of what took place, ending at any given instant in time. If a plane crash actually took 20 minutes to unfold, and if we assume the black box stopped recording once the plane hit the ground, we would only know about the last 15 minutes of the flight and have no data prior to that point (we would not know what occurred in the first 5 minutes of this 20-minute crash scenario).
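To make the overwrite mechanics concrete, here is a minimal sketch in Python of a rolling recorder, assuming one sample per second and the 15-minute window from the example above (illustrative figures, not from any actual avionics spec):

```python
from collections import deque

# Fixed-capacity buffer that always holds the most recent N samples.
# Assumed sampling rate: 1 sample per second (illustrative only).
WINDOW_SECONDS = 15 * 60

class RollingRecorder:
    def __init__(self, capacity=WINDOW_SECONDS):
        # deque with maxlen silently discards the oldest entry on overflow,
        # mirroring how minute 16 overwrites minute 1.
        self.buffer = deque(maxlen=capacity)

    def record(self, sample):
        self.buffer.append(sample)

    def dump(self):
        # After a crash, only the last `capacity` samples survive.
        return list(self.buffer)

recorder = RollingRecorder()
for t in range(20 * 60):        # a 20-minute flight, one sample per second
    recorder.record({"t": t, "speed_knots": 500})
print(len(recorder.dump()))     # 900 samples: only the final 15 minutes remain
```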

There are some black boxes that don’t record anything until the point at which they are triggered to operate. Returning to the plane crash scenario, it might be that the black box did no recording at the beginning of the flight. Instead, when the flight begins to exhibit a problem, that might trigger the black box to operate. Suppose the right engine of the plane suddenly stops working. That might be a preset trigger to start the black box, which then does its recording. If it has a limited data volume, let’s pretend it can only hold five minutes’ worth of data, it would capture the last five minutes of the flight, assuming it stops recording once the plane hits the ground. At times, the black box might not realize that it has hit the ground, and so might continue recording even after the crash has ended (which, again, given the limited capacity of the black box, could mean that valuable info about the end of the crash is lost).
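And here is a similar sketch of the trigger-based variant, assuming a hypothetical fault signal and the five-minute budget from the example (again, purely illustrative, not a real avionics interface):

```python
# Nothing is recorded until a preset fault condition fires; the device then
# records into a small fixed budget of samples.
class TriggeredRecorder:
    def __init__(self, budget_samples=5 * 60):  # five minutes at 1 sample/sec
        self.budget = budget_samples
        self.active = False
        self.samples = []

    def on_fault(self):
        # e.g., the right engine suddenly stops working
        self.active = True

    def record(self, sample):
        # With no reliable "hit the ground" signal, recording keeps consuming
        # the budget even after the crash, just as the paragraph above warns.
        if self.active and len(self.samples) < self.budget:
            self.samples.append(sample)
```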

Why all this discussion about black boxes? Because we are going to become increasingly reliant on them for self-driving cars, at least that’s one viewpoint, and, as you’ll see next, it carries some controversy.

At the Cybernetic Self-Driving Car Institute, we are keenly interested in black boxes for cars. They potentially hold the key to understanding what happened in the case of a self-driving car that gets into a crash. Allow me to elaborate.

First, instead of referring to them as black boxes (nowadays they come in various colors; orange is popular so that the device stands out amongst crash rubble), we call them Event Data Recorders (EDRs).

Believe it or not, EDRs or black boxes for cars have been around for a while (taking the black box normally used on an airplane and putting one into a car is not a new idea). During the mid-1990s, several prominent car makers opted to start including EDRs, such as in Pontiacs and Buicks. There was little fanfare about this. Consumers were pretty much unaware of the inclusion of an EDR in their car. It’s not the kind of added feature that car manufacturers tout, which makes sense, since mentioning it brings up the unpleasant notion that someday your car might crash and you’ll be killed in it. But, hey, at least the EDR will survive! That’s not a stirring reason to rush out and buy a particular brand of car equipped with a black box.

Why are black boxes so prominent on airplanes? I think we can all pretty much agree that having a black box on an airplane is for the good of the public. When a plane crashes, and suppose it holds 300 souls, a lot of family members and regulators want to know why that plane went down. If there are no survivors, we have no particular information about what happened on the flight. Even if there are survivors, they might not know what happened. Even if the pilot or co-pilot survives, they might not know, might misremember, or might (rarely) even lie about what happened.

Not only is the black box handy for understanding a particular crash, it is also helpful for the plane makers. Suppose the black box recording reveals that a model of airplane cannot handle severe rain at high altitudes due to what is found to be a design limitation in the wings. This would allow the plane manufacturer to redesign the wings accordingly. Or, suppose the plane had problems due to its tail, and we later found out that the tail was not getting its prescribed monthly maintenance; this would help drive home that all such planes need to rigorously receive their monthly maintenance.

In short, the black box on a plane helps us understand what occurred during a crash, and it can lead to improvements in how planes are designed, made, and maintained. There is also another set of important reasons. Families of those killed in a plane crash want to know whether the plane was safely built and operated; if not, they will tend to seek legal claims against the maker or the operator for damages. Insurance companies also want to know, since they are often backing claims related to having insured the flight. Regulators want to know, since they are concerned about whether commercial plane makers and operators are doing what is safe and abiding by regulations. Plus, they might discover that the regulations are lax and need to be tightened or otherwise expanded.

Do those reasons also apply to the use of EDRs in cars? Some would say yes, those reasons do apply. Though a car crash might involve only one person or a few people, while an airplane crash might involve hundreds, it is considered likewise important to know what led to the crash. Car insurers certainly want to know. Those harmed in a car crash want to know. There have been many car crash cases involving one or more cars with an EDR, and the EDR became essential to the facts of the case. Suppose a driver claims they were legally driving at the posted speed. The EDR recording can help either verify or refute that claim.

If you are interested in EDRs for cars, you should take a close look at the U.S. federal regulations in CFR Title 49, Subtitle B, Chapter V, Part 563, often referred to as “49 CFR 563,” which specifies aspects of EDRs. Here’s the stated scope: “This part specifies uniform, national requirements for vehicles equipped with event data recorders (EDRs) concerning the collection, storage, and retrievability of onboard motor vehicle crash event data. It also specifies requirements for vehicle manufacturers to make tools and/or methods commercially available so that crash investigators and researchers are able to retrieve data from EDRs.”

How does this apply to self-driving cars?

A self-driving car is considered a car, by federal guidelines, and so it can have an EDR.  Notice that I said it can have one. Allow me to offer an instance of when it would have been very handy to have had one in a self-driving car.

You might recall that I’ve mentioned in several of my columns that the Tesla fatal car crash of 2016 is an important first-case of a car crash involving a self-driving car. The federal investigation that examined the Tesla mentioned this: “The regulation does not require that vehicles be equipped with event data recorders. Equipping a vehicle with an event data recorder is completely voluntary.”

It then also says this: “The Tesla Model S involved in this crash did not, nor was it required by regulation, contain an event data recorder” (for further details, see the National Transportation Safety Board (NTSB) report, dated March 7, 2017, entitled “Driver Assistance System: Specialist’s Factual Report”).

No one seems to have gotten upset about this aspect of the Tesla crash case. There wasn’t any public outrage. Imagine a plane crash that had no black box; I am guessing many would wonder why there wasn’t one. Some have quietly wondered why Tesla does not have a formal EDR. As we get more self-driving cars on the road, and I am not just referring to Tesla but to all the car makers, I believe we’ll see more self-driving car crashes. Eventually this is going to become a publicly debated issue, along with pressure on regulators to consider making EDRs mandatory specifically for self-driving cars.

I am singling out self-driving cars over normal human-driven cars because there is ongoing debate about violating the privacy of human drivers. Some say that having an EDR in a car is a potential violation of your right to privacy. Usually, whatever is recorded by the EDR can only be reviewed under certain circumstances; indeed, about a dozen states specify when it can be inspected, such as only with a court order or under other special circumstances. Though there might be legal protections, there is also the possibility of hacking into the EDR. People are worried that their private driving data might be hacked.

Others say that they don’t get why anyone would believe their driving data is such a big secret. So what if you drove 50 miles at 40 miles per hour and made two left turns along the way? Who cares? The counter-argument is that the EDR can reveal where you drove, if it is capturing GPS data. Maybe you don’t want others to know that you drove to the shady part of town. Why should that be anyone’s business but your own? This debate goes on and on.

For self-driving cars, the question arises as to whether the self-driving car has any kind of expected right to privacy. You might say that the AI of the car deserves a right to privacy, but that is kind of off-kilter since the AI is not alive (at least not yet, and not for a very long time, if ever, some would say).

You could say that the occupants of the self-driving car are, by extension, where the privacy issue arises. It’s not the self-driving car’s AI that needs the privacy, but the occupants. Those, though, who are worried about the emergence and safety of self-driving cars would argue that the value of the EDR data, for the good of the public and for the good of self-driving car innovation, should outweigh the privacy concerns of the occupants.

They also point to today’s cabs and ridesharing services. When you take a taxi today, do you have a right to privacy if that taxi happens to have an EDR in it? Most would say you do not.  But then again, a taxi and owning your own self-driving car could be construed as vastly different situations.

Anyway, let’s get back to the Tesla crash.

The Tesla had its own proprietary Electronic Control Unit (ECU) that did record some data. Note, though, that as stated in the NTSB report, since the regulation does not require having an EDR, and since the ECU is not an EDR, there was no need for the ECU to have recorded data in accordance with the provisions of a federally regulated EDR: “…the data recorded by the ECU was not recorded in accordance with the regulation,” which is OK since the ECU wasn’t an EDR.

What does this mean? It means that the ECU did not have to record the type of data required by federal regulations for EDRs, nor did it have to comply with the formatting requirements. It was entirely up to the car maker to decide what they thought would be valuable to record and how to record it. Under the federal regulations, they are perfectly free to do so; the regulations don’t force them to do otherwise. But, some ask, should we allow such latitude? Would it be better to force the self-driving car makers to abide by the regulations?

Besides ensuring that the types of data the feds consider important for a crash investigation actually get recorded, using the EDR regulation also allows the feds to directly inspect that data.

In the Tesla case, the ECU data was recorded in a proprietary format devised by Tesla. Here’s what the federal investigation said: “…there is no commercially available tool for data retrieval and review of the ECU data. NTSB investigators had to rely on Tesla to provide the data in engineering units using proprietary manufacturer software.”

Thus, the NTSB was unable to directly inspect the ECU data, which was not in the federal standard format and did not record all of the types of data required by the federal government, all of which was perfectly fine for Tesla to have done.
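To see why a standardized, publicly documented format matters, consider this sketch: if the byte layout of a record is published, any investigator can decode the data with generic tools, no manufacturer software required. The layout below is entirely made up for illustration; it is not the 49 CFR 563 format, nor Tesla’s:

```python
import struct

# Hypothetical little-endian record layout: timestamp (uint32), speed (float32),
# brake flag (uint8), airbag flag (uint8). Illustrative only.
RECORD = struct.Struct("<IfBB")

def decode(blob):
    ts, speed, brake, airbag = RECORD.unpack_from(blob)
    return {"timestamp": ts, "speed_kmh": speed,
            "brake_applied": bool(brake), "airbag_deployed": bool(airbag)}

# Any investigator with the published layout can decode a raw dump directly.
raw = RECORD.pack(1488875400, 117.5, 1, 1)
print(decode(raw))
```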

Why does this make a difference?

Looking at another part of the NTSB investigation, here’s something of interest: “The recorder data that NTSB investigators received from Tesla does not provide information about the activation of the Speed Assist system. NTSB investigators cannot determine whether the system (1) was available (i.e., not turned off), or, (2) activated during the trip on which the crash occurred.”

As you can see, the NTSB had to rely on whatever Tesla told it about what was or was not there. I am not implying anything was afoot. I am just saying that in any car crash involving any car maker, whether Tesla, Ford, Toyota, or anyone else, it would generally be preferable for the NTSB itself to be able to inspect the recorded data. I am betting that the public would likely want to see this occur.

Allow me, though, to also poke holes in the existing federal regulations. The 49 CFR 563 is woefully inadequate for self-driving cars, in my humble opinion. It requires that only 15 key variables be tracked, such as speed, airbag deployment, brake application, etc. These are certainly essential, but insufficient. There are 30 additional variables that are encouraged to be collected, including GPS, but they are not mandatory. Even these additional variables, though, are not advanced enough for self-driving cars. In other words, the advent of self-driving cars should get us to augment and revise the requirements to accommodate more kinds of data and to rethink the nature of the data to be kept.
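To illustrate the required-versus-encouraged split, here is a hypothetical sketch of what an augmented EDR record might look like, with regulation-style mandatory elements alongside the optional ones that I’d argue should become mandatory for self-driving cars (the field names are mine, not the regulation’s):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EdrRecord:
    # Mandatory-style elements (in the spirit of the 15 required variables)
    vehicle_speed_kmh: float
    service_brake_applied: bool
    airbag_deployed: bool
    # Encouraged-but-optional elements, defaulting to "not recorded"
    gps_lat: Optional[float] = None
    gps_lon: Optional[float] = None
    # Self-driving-specific data that today is not contemplated at all,
    # e.g., a summary of what the sensors and AI were perceiving and deciding
    sensor_summary: Optional[dict] = None
    ai_decision_log: Optional[list] = None
```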

Worse still, the EDR data only needs to cover about twenty seconds surrounding the crash, according to the federal regulations. This is far too little. In the olden days, having electronic memory in a black box was not easy, because it was costly to have hardened memory that could withstand the harshness of a crash, and so the time limit was allowed to be very low. Plus, the amount of memory required used to mean that the black box would need to be larger and heavier. This is no longer the case.
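Some quick back-of-the-envelope arithmetic makes the point, assuming illustrative figures of 100 bytes per sample and 10 samples per second (my assumptions, not numbers from the regulation):

```python
# How much hardened storage would longer recording windows actually need?
bytes_per_sample = 100      # assumed record size
samples_per_second = 10     # assumed sampling rate

def storage_needed(seconds):
    return seconds * samples_per_second * bytes_per_sample

print(storage_needed(20))       # 20 seconds -> 20,000 bytes (~20 KB)
print(storage_needed(30 * 60))  # 30 minutes -> 1,800,000 bytes (~1.8 MB)
# Even a 30-minute window is a trivial amount of modern hardened flash.
```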

As you can see, I am a strong advocate for requiring EDRs on self-driving cars, forcing the self-driving car makers to abide by the federal regulations, and getting the regulators to expand and modernize those regulations. I am certainly torn somewhat, because I am leery of over-regulating an industry, especially one going through rapid innovation. But, it seems to me, asking for a more robust black box does not overly burden the self-driving car makers. It would in the end probably be better for them. When some self-driving car goes bad, the data would help to point at that particular car maker; otherwise, we might end up assuming that all self-driving cars and car makers are bad.

Plus, it might actually protect the reputation of a self-driving car. Suppose a human driver collides with a self-driving car and claims that the self-driving car made the wrong decision. We might not be able to tell from the accident scene what really happened, but with the EDR we might be able to piece things together, and if it wasn’t the fault of the self-driving car, we’d have proof.

Which is going to be stronger proof, the word of a human driver of a conventional car or the recording of a self-driving car? If we don’t have any record of what the self-driving car was doing, people will probably side with the human driver. On the other hand, if we have the word of the human driver versus the cold, hard facts of the EDR, we’d probably no longer think of it as a “he said, she said” situation, and presume that the EDR was the right indicator of what occurred.

I want to add some caution here, though. As a technologist and AI developer, I can say that we should not blindly believe whatever the EDR says. Suppose the EDR was not properly recording? There could be a flaw in the EDR software. The sensors of the self-driving car might be falsely reporting data. And so on. It could also be something intentional on the part of whoever programmed the EDR, in that maybe they purposely wrote it to omit something or transform something.

All I am saying is that simply having an EDR is not the end-all; in any circumstance we would also need to ensure that we can believe what the EDR claims to have recorded. Think of the movie “Minority Report” and you’ll get my drift.
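For what it’s worth, there are known techniques that could at least make EDR recordings tamper-evident, though to be clear the following is my illustrative sketch and not how any production EDR works. Chaining each record to a hash of the prior record means that silently omitting or altering an entry breaks every hash after it:

```python
import hashlib
import json

def chain(records):
    # Each entry's hash covers the previous hash plus the current record.
    prev = "genesis"
    out = []
    for rec in records:
        digest = hashlib.sha256(
            (prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        out.append({"record": rec, "hash": digest})
        prev = digest
    return out

def verify(chained):
    # Recompute the chain; any omission or alteration breaks it.
    prev = "genesis"
    for entry in chained:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["record"], sort_keys=True)).encode()).hexdigest()
        if expected != entry["hash"]:
            return False
        prev = expected
    return True
```

Note that this only guards against after-the-fact tampering; it does nothing about a flawed recorder or falsely reporting sensors, which is exactly why we would still need to validate the EDR itself.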

For those of you on your toes about self-driving cars, I am sure you might be thinking that the era of the EDR or black box is now over, since we can do over-the-air communications with a self-driving car. Those of you new to self-driving cars might not realize that many self-driving cars are sending data, usually via the Internet or something comparable, to the servers of the car maker (or others). Thus, there is data inside the self-driving car, and also data sent from the self-driving car to an external remote server. This is often done continuously during a driving journey.

We presumably don’t need to dig in the rubble of a car crash and can instead go to those that control the servers and ask them to give us the data about what was going on inside the self-driving car. Not so fast!

As per the NTSB report of the Tesla crash, they summed up the concern about this reliance as follows: “In general, data stored on-board the vehicle will contain information additional to that contained on Tesla servers. Specifically, any data stored since the last auto-load event will exist only on the vehicle itself…”

In essence, the servers of any car maker are likely not to have all of the same data as was recorded by the EDR. It could be that there wasn’t an over-the-air connection to allow the EDR to push its data over to the servers. It could be that the server refresh only occurs, say, every hour, or perhaps only when triggered in certain situations. It could be that of the, say, 15 types of data, the EDR only pushes over 10 of the required types. There are a thousand reasons that whatever is on the server might not reflect the data that was recorded by the EDR.
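Here’s a small sketch of that gap, assuming a hypothetical hourly upload cycle (the interval is my assumption; real systems vary):

```python
UPLOAD_INTERVAL = 60 * 60  # assumed: push to the server once per hour

class TelemetryUplink:
    def __init__(self):
        self.onboard = []       # everything the on-board recorder has captured
        self.server = []        # what has actually reached the remote server
        self.last_upload_t = 0

    def record(self, t, sample):
        self.onboard.append((t, sample))
        if t - self.last_upload_t >= UPLOAD_INTERVAL:
            self.server = list(self.onboard)  # periodic "auto-load" push
            self.last_upload_t = t

    def gap(self):
        # Samples that would be lost if only the server survived the crash
        return len(self.onboard) - len(self.server)
```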

This, though, brings up another reason to consider updating the federal regulations: we presumably should have some means of identifying what such recording servers should or should not contain, something that right now is without regulatory guidance (again, I am not saying we should burden the car makers, just that we should at least consider what makes reasonable sense).

If you are an AI developer like me, you would want as much data as possible recorded by the EDR and pushed over to the server, which would allow you to more thoroughly discover what happened in a self-driving car crash. If the EDR gets destroyed in the crash, it would be great to have as much as possible on the server to look at. Even if the EDR survives, if the server has more timely or more extensive data, I’d want to look at both the EDR and the server. Also, suppose the EDR data gets partially corrupted; I could compare it to what’s on the server. Or, suppose the data gets corrupted when sent to the server; I could compare what’s on the server to what the EDR says.
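As a simple sketch of that cross-check, one could fingerprint both copies and compare them, assuming both sides record the same logical stream (my illustrative approach, not any standard):

```python
import hashlib

def fingerprint(records):
    # Hash the records in order; identical streams yield identical digests.
    h = hashlib.sha256()
    for rec in records:
        h.update(repr(rec).encode())
    return h.hexdigest()

def copies_agree(edr_records, server_records):
    # A mismatch flags corruption somewhere, though it cannot by itself
    # say which copy is the damaged one.
    return fingerprint(edr_records) == fingerprint(server_records)
```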

Of course, not everyone shares that same desire for having as much data as possible. We’ve already discussed here that there are privacy concerns. In addition, some wonder whether having all this data is good or bad for self-driving cars. Maybe having too much data is bad. Maybe it will become a witch hunt whenever there is a self-driving car crash. Those that want to stop the advent of self-driving cars might try to use the EDR and server data in untoward ways. Do lawyers want this data to exist or not? Do insurers want it to exist or not?

Big questions. At least we should be discussing them. Right now, hardly anyone is paying attention to the topic of EDRs for self-driving cars. It barely registers as a topic even for human-driven cars. I hope that I have sparked some interest in the topic overall, and especially for self-driving cars.

Like it or not, I am betting we’ll all eventually be discussing this topic once self-driving cars are readily available and we are regrettably seeing self-driving car related crashes and incidents, for whatever reasons. Then people will ask, hey, what about those black boxes?

This content is original to AI Trends.