NTSB Releases Initial Report on Fatal Uber Pedestrian Crash; Dr. Lance Eliot Seen as Prescient

From the NTSB report: Uber self-driving system data playback from the fatal March 18, 2018, crash.

The National Transportation Safety Board (NTSB) on May 24, 2018, released its preliminary report for the ongoing investigation of a fatal crash involving a pedestrian and an Uber Technologies, Inc., test vehicle in Tempe, Arizona.

The modified 2017 Volvo XC90, occupied by one vehicle operator and operating with a self-driving system in computer control mode, struck a pedestrian on March 18, 2018. The pedestrian suffered fatal injuries; the vehicle operator was not injured.

The NTSB’s preliminary report, which by its nature does not contain probable cause, states the pedestrian was dressed in dark clothing, did not look in the direction of the vehicle until just before impact, and crossed the road in a section not directly illuminated by lighting. The pedestrian was pushing a bicycle that did not have side reflectors; its front and rear reflectors, along with the forward headlamp, were perpendicular to the path of the oncoming vehicle.

The pedestrian entered the roadway from a brick median, where signs facing toward the roadway warn pedestrians to use a crosswalk, which is located 360 feet north of the Mill Avenue crash site. The report also notes the pedestrian’s post-accident toxicology test results were positive for methamphetamine and marijuana.

In its report the NTSB said Uber equipped the test vehicle with a developmental self-driving system consisting of forward- and side-facing cameras, radars, light detection and ranging (LIDAR) sensors, navigation sensors, and a computing and data storage unit integrated into the vehicle. The vehicle was factory equipped with several advanced driver assistance functions by the original manufacturer, Volvo Cars, including a collision avoidance function with automatic emergency braking, as well as functions for detecting driver alertness and road sign information. The Volvo functions are disabled only when the test vehicle is operated in computer control mode.

The report states that data obtained from the self-driving system shows the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle, with varying expectations of future travel path.
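To put the report’s figures in perspective, here is a quick back-of-the-envelope sketch. The 43 mph speed and the six-second and 1.3-second marks come from the report; the braking deceleration is an assumed value for illustration, not a figure from the report.

```python
# Back-of-the-envelope distances for the timeline in the NTSB report.
# Speed and timing figures are from the report; the braking
# deceleration below is an illustrative assumption.

MPH_TO_FPS = 5280 / 3600              # 1 mph = ~1.4667 ft/s

speed_fps = 43 * MPH_TO_FPS           # ~63.1 ft/s

# Distance covered between first detection and impact (~6 s).
detection_distance = speed_fps * 6.0  # ~378 ft

# Distance covered in the final 1.3 s, when the system determined
# emergency braking was needed.
final_window = speed_fps * 1.3        # ~82 ft

# Assumed hard-braking deceleration of ~0.7 g on dry pavement
# (illustrative; not a figure from the report).
decel = 0.7 * 32.174                  # ~22.5 ft/s^2
stopping_distance = speed_fps**2 / (2 * decel)  # ~88 ft

print(f"Distance at first detection: {detection_distance:.0f} ft")
print(f"Distance in final 1.3 s:     {final_window:.0f} ft")
print(f"Full stop from 43 mph needs: {stopping_distance:.0f} ft")
```

Under that assumed deceleration, a full stop from 43 mph takes roughly 88 feet, slightly more than the roughly 82 feet remaining at the 1.3-second mark, consistent with the report’s framing that emergency braking at that point was needed to mitigate, not necessarily prevent, the collision.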

At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, in order to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action; the system is not designed to alert the operator.
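The handoff the report describes can be summarized in a short sketch. This is not Uber’s actual code; the names and structure are hypothetical, and the logic merely mirrors the behavior the NTSB describes: the system recognizes that emergency braking is needed, but with braking disabled under computer control and no operator alert issued, intervention depends entirely on the operator noticing the hazard unaided.

```python
# Hypothetical sketch of the handoff behavior described in the NTSB
# report. Names and structure are illustrative, not Uber's code.

from dataclasses import dataclass

@dataclass
class VehicleState:
    computer_control: bool          # True when self-driving system drives
    emergency_braking_enabled: bool
    operator_alerts_enabled: bool

def handle_imminent_collision(state: VehicleState) -> str:
    """Decide the response once the system determines that emergency
    braking is needed to mitigate a collision."""
    if not state.computer_control:
        return "factory Volvo AEB active; vehicle brakes itself"

    if state.emergency_braking_enabled:
        return "self-driving system commands emergency braking"

    if state.operator_alerts_enabled:
        return "alert operator to intervene"

    # Per the report: braking is disabled under computer control to
    # avoid erratic behavior, and the system was not designed to alert
    # the operator, so intervention depends on the operator unaided.
    return "rely on operator to notice and intervene (no alert)"

# Configuration at the time of the crash, as described in the report:
crash_config = VehicleState(
    computer_control=True,
    emergency_braking_enabled=False,
    operator_alerts_enabled=False,
)
print(handle_imminent_collision(crash_config))
```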

In the report the NTSB said the self-driving system data showed the vehicle operator engaged the steering wheel less than a second before impact and began braking less than a second after impact. The vehicle operator said in an NTSB interview that she had been monitoring the self-driving interface and that, while her personal and business phones were in the vehicle, neither was in use until after the crash.

All aspects of the self-driving system were operating normally at the time of the crash, and there were no faults or diagnostic messages.

AI Trends contributor Dr. Lance Eliot, CEO of Techbrium Inc., stated at the time of the accident that, rather than the sensors failing to work properly, as many experts originally suggested, “The greater likelihood was that the AI system was not able to make a decision in time to make a difference.”

He wrote about this in his column on the Initial Forensic Analysis of the fatal Uber crash, and later wrote about the need for developers to devote more attention to the Cognitive Timing of AI systems in self-driving cars.

After release of the NTSB report, Dr. Eliot stated to AI Trends, “I had said that at most it could have first been aware at 8-10 seconds, but that it was probably more towards half that time, and that the reason there wasn’t any immediate evidence of the car slowing or braking was that it probably took too long for the AI to decide what was occurring and what action to take.

“Now, that being said, there’s a twist that no one knew, namely that Uber had made the decision that whenever an emergency braking situation were to arise, the AI was to hand the situation over to the back-up driver. In this case, the back-up driver made an attempt to do something with about 1 second left to go, but it was already too late.”

In another column, Dr. Eliot even anticipated the dangers of relying on Back-up Human Drivers for AI self-driving cars.

Expressing a little frustration, Dr. Eliot stated, “I feel like the boy that cried wolf, but in this case, there really are wolves and nobody seems to be paying attention.”

Compiled by AI Trends Editor John P. Desmond