Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Uber Self-Driving Car Which Killed a Woman Wasn’t Programmed to Account for Jaywalkers

November 12, 2019

The Verge reported on November 6th that new evidence has emerged showing that Uber did not have a formal safety plan in place at the time one of its self-driving cars killed a woman in Tempe, Arizona in March 2018. The evidence comes from new documents released by the National Transportation Safety Board last week. Uber's autonomous vehicles also were not programmed to react to people who were jaywalking. We also learned that the company's vehicles were involved in more than three dozen crashes before the one that killed 49-year-old Elaine Herzberg last year as she walked her bike across a street.

The new evidence, consisting of over 400 documents, may lead to a contentious hearing later this month when the NTSB will decide the probable causes of the crash. The documents paint a picture of a company where safety lapses, poor staffing decisions, and technical miscalculations all contributed to Herzberg's death. Many companies pursuing self-driving cars were trying to get them on the road as quickly as possible. Her death may have slowed them down.

Uber is likely to avoid any serious repercussions, as the local prosecutor on the case has said she is declining to press charges. I am not sure what motivated that decision.

According to the NTSB, the software installed in Uber's vehicles that helps them detect and classify other objects "did not include a consideration for jaywalking pedestrians." The system did detect Herzberg, who was walking her bike across North Mill Road outside the crosswalk a few minutes before 10 p.m. But it classified her as an "other object," not a person.

"As the [automated driving system] changed the classification of the pedestrian several times—alternating between vehicle, bicycle, and an other—the system was unable to correctly predict the path of the detected object," the board's report states.
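To see why the constant reclassification mattered, consider a minimal sketch of an object tracker (the names and structure here are my own illustration, not Uber's actual software) in which each reclassification discards the object's motion history. A tracker built this way can never accumulate the two-plus observations of a single object class it needs to estimate a velocity, so it never predicts a path:

```python
# Hypothetical sketch, NOT Uber's code: a tracker that resets an object's
# observed positions whenever its classification changes can never build
# the motion history needed to predict a trajectory.

class TrackedObject:
    def __init__(self):
        self.classification = None
        self.history = []  # (time, position) samples for the current class

    def observe(self, classification, time, position):
        if classification != self.classification:
            # Reclassification discards prior motion history, mirroring
            # the behavior the NTSB report describes.
            self.classification = classification
            self.history = []
        self.history.append((time, position))

    def predicted_velocity(self):
        # Need at least two samples under the same classification.
        if len(self.history) < 2:
            return None
        (t0, p0), (t1, p1) = self.history[-2], self.history[-1]
        return (p1 - p0) / (t1 - t0)

obj = TrackedObject()
# Classification flips on successive frames, as in the crash:
for t, (cls, pos) in enumerate([("vehicle", 0.0), ("other", 0.5), ("bicycle", 1.0)]):
    obj.observe(cls, t, pos)
    print(obj.predicted_velocity())  # None every time: history keeps resetting
```

A tracker that instead carried motion history across reclassifications would have produced a velocity estimate after the second frame, regardless of the label it assigned.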

The NTSB investigation revealed that Uber only built in a one-second delay between crash detection and action to avoid false positives.

Uber's vehicle detected Herzberg 5.6 seconds before impact, but it failed to brake because it kept misclassifying her. Each time the automated driving system came up with a new classification, it had to calculate a new trajectory for the object. A one-second "action suppression" window was supposed to hand control back to the operator for manual braking. But if the operator failed to deal with the situation in that one-second interval — which, in this case, she did — the system was designed to provide an auditory warning that collision was imminent and start a gradual (but not maximum) braking process.
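The sequence above can be sketched in a few lines. The one-second constant and the two responses (operator braking versus warning-plus-gradual-braking) come from the NTSB's description; the function name and structure are my own simplification, not Uber's implementation:

```python
# Simplified sketch of the pre-crash "action suppression" logic as the
# NTSB describes it. Illustrative only -- not Uber's actual software.

ACTION_SUPPRESSION = 1.0  # seconds reserved for the human operator

def respond_to_imminent_crash(seconds_since_alert, operator_braked):
    """What the pre-crash system did once a collision was deemed imminent."""
    if seconds_since_alert < ACTION_SUPPRESSION:
        # Inside the suppression window the car takes no action itself;
        # only the safety operator can brake.
        return "operator braking" if operator_braked else "no action"
    # Window expired without operator input: sound a warning and begin
    # gradual (not maximum) braking.
    return "auditory warning + gradual braking"

# In the Tempe crash the operator did not brake within the window:
print(respond_to_imminent_crash(0.5, operator_braked=False))  # no action
print(respond_to_imminent_crash(1.2, operator_braked=False))  # auditory warning + gradual braking
```

The consequence is that even after the system finally committed to a collision prediction, a full second passed with no braking at all, and what followed was deliberately less than maximum braking.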

In the months since the crash, Uber has dropped action suppression and now applies maximum emergency braking to prevent crashes. In this new setup, Uber says the vehicle would have braked four seconds earlier, implying that it would have avoided killing Herzberg.

Between September 2016 and March 2018, Uber's autonomous vehicles were involved in 37 "crashes and incidents" while in autonomous mode, the board reports. But Uber's cars were the "striking vehicle" in only two of those crashes; the majority involved another vehicle striking the autonomous car (33 such incidents; 25 of them were rear-end crashes, and in eight crashes, Uber's test vehicle was sideswiped by another vehicle).

There was a lack of adequate safety planning by Uber in advance of the fatal crash, the board states. Uber's Advanced Technologies Group (ATG) had a technical system safety team, "but did not have a standalone operational safety division or safety manager," the board states. "Additionally, ATG did not have a formal safety plan, a standardized operations procedure (SOP) or guiding document for safety."

Uber argues that it did have safety policies, procedures, and engineering practices that, taken together, could be considered a safety plan, but it acknowledges not having a formal plan in place at the time of the crash. There is no federal rule requiring AV operators to have or submit safety plans to the government; there are only voluntary guidelines. Uber released its first safety report in November 2018.

For me, this crash is instructive about what was clearly poorly designed artificial intelligence – how is it remotely possible that a self-driving car doesn't anticipate the actions of jaywalkers? One shudders to think of other scenarios that may have been overlooked. And giving a human operator just one second to correct the misclassification of a person while traveling at 45 miles per hour makes no sense at all. I will be interested in monitoring the NTSB hearing later this month.

Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225 | Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson