Another level of human error

Web Exclusive | Editorial/Commentary | March 23, 2018
Brian W. Budzynski

On Sunday, March 18, a woman was struck and killed by an autonomous Uber vehicle whose human safety driver was engaged not with the road or the operation of the vehicle but with his phone. This is believed to be the first AV-related pedestrian death in the U.S.

It is difficult, in my capacity reporting on the transportation industry and thus monitoring the development of autonomous driving technology, to know where to lay the blame for this horrible event. Nor is it any easier to find solid ground on which to stake a position on how this will, or should, affect AV deployment on public roadways.

The dashcam and interior cab videos released online are jarring; both cut off a split second before impact. My initial snap reaction was to rage against the safety driver. Surely he should not have been allowed in the vehicle at all if he were in any way unclear about his responsibilities, chief among them keeping his eyes and attention on the road, autonomous mode notwithstanding. These vehicles are in the pilot phase, after all, and the technology is not, frankly, anywhere near fully optimized.

When that snap judgment began to cool, I considered the other factors at play here. The state of Arizona is, relative to the rest of the U.S., quite progressive in allowing AVs on its public roadways. Its legislature and DOT clearly see themselves as harbingers of crucial developmental change in transportation, a wellspring of technological development and of tolerance for its application. My question is whether it is wise to have these vehicles on public roadways at all. The general infrastructure, on which AVs depend for positioning information, leaves much to be desired, and the onboard systems themselves are not exactly old tech; this is brand-new stuff when you consider the full history of motorized transportation. While it is impossible to say that such an accident would not have happened in an ordinary driver-led vehicle (especially with a driver more focused on his phone than on the road ahead), the fact that this vehicle's AV system was not advanced enough to adapt to a pedestrian in the roadway says much about how far this technology still has to go before it is ready for exposure to the general public.

I also considered Uber's responsibility here. How well was that driver trained? He was being monitored within the cab by a camera, but that didn't seem to matter much. The number of issues Uber has had, from criminals behind the wheel to questionable business practices, screams to me of a company whose primary priority is not public safety and benefit but, above all, brand maintenance, public perception control, and profitability (even though Uber, Lyft and the like are fiscal sieves). In a case like this, where does the buck stop? Does it stop on-scene, with the driver? With the company that employed him? With the state that apparently did not feel proper standards and oversight were required in plain ink before allowing these vehicles to roll around public roads? With the technology developer, for failing to build a system advanced enough that such a tragedy could, and should, have been avoided?

In some ways this is murkier than the Tesla incident. In my opinion, when you brand what is in essence merely an assisted-driving mode, one that still requires (REQUIRES) the attention of the driver, as "Autopilot," you're asking for misinterpretation; you're mischaracterizing your product and what it can and cannot do. The Uber incident, for its part, is a reminder that self-driving technology is still in the experimental stage. Despite the much-vaunted talk of major gains in public safety through the pervasive development and use of driverless technology, so far I don't know how much gain we're really seeing. At the least, I don't believe the technology itself is developed enough to warrant piloting on open public roadways.

If those with financial and intellectual-property skin in the AV game are truly dedicated to boosting safety, and to not putting the cart before the horse in order to get there, then what is perhaps best is to remove AV testing vehicles from public roadways until isolated regional pilots can demonstrate, with an extremely high level of surety, that the technology on board can adapt to our unpredictable world.

One argument for rushing AVs out into the world as they have been is that human error is responsible for nearly all traffic incidents, injuries and deaths. This is statistically true. But now we are dealing with another level of human error: the hubristic notion that there is an acceptable percentage of collateral damage we as a society can absorb in order to allow technology and vehicle manufacturers to grow their business. Because at this stage of development, it really is about money more than anything else, regardless of the lip service paid to any given company's desperate desire to save lives. Trust me, I attend enough industry conferences to know, having seen so many startups plainly frothing at the corporate mouth to be absorbed into one of the big developers.

Technological progress in this area can continue at its leaping pace without doing so on public roads. With the financial implications as massive as they are, and with the general global consensus being that AVs will be the mode of the future, it would be a worthwhile investment to create a testing facility, say the size of a small national park, at which developers and testers can deliberately and repeatedly throw anything and everything they can think of at test vehicles. The death of Elaine Herzberg was too high a price to pay to learn that, as it stands, these systems are just not road-ready.
