News of a fatal crash involving a prototype self-driving Uber car and a pedestrian recently made tech headlines across the world. The incident, which took place in Arizona in March this year, marks the first pedestrian fatality caused by an autonomous vehicle.
The car, which was carrying a test driver, struck a pedestrian, Elaine Herzberg, who later died from injuries sustained in the crash. Sources allege that the test driver was looking down at the moment of impact, though dash-camera footage shows there was little time to react: the pedestrian crossed suddenly into the vehicle’s path.
In response to the crash, the Arizona Governor formally suspended Uber’s autonomous vehicle testing privileges, which the company had enjoyed in the state since 2016. He called the incident ‘an unquestionable failure to comply’ with the objective of public safety, adding that video footage of it was ‘disturbing and alarming.’
Both the test driver and the Tempe Police have issued statements on the crash. Controversially, the Police Chief announced that ‘Uber would likely not be at fault’ for Herzberg’s death, drawing backlash from Safe Streets advocates. Uber associates declined to comment on the incident.
Who is responsible for the behavior of autonomous technology? Although the Arizona governor announced rules that assign responsibility for autonomous vehicle crashes to private corporations, the mechanics of such responsibility remain nebulous. How do we determine which individuals are most at fault for an incident? Do software and hardware developers bear more responsibility than corporate executives? What is the legal agency of a supervisor or test driver?
Auto crashes are among the leading causes of death in the United States, with over 40,000 motorists dying on American roads last year. Yet when a single casualty occurs at the hands of autonomous technology that could drastically improve road safety, officials suspend testing. Society extends a measure of understanding to road casualties only when the incidents are purely human; when software and hardware designed to simulate human judgment enter the mix, lawmakers, officials, and advocacy groups adopt a zero-tolerance policy.
The pedestrian’s death is deeply saddening and a blot on autonomous technology’s supposedly infallible record. But from a utilitarian standpoint, it is fundamentally illogical to halt the progress of autonomous technology over an isolated incident. In the long term, autonomy stands to save millions of lives, and the technology needs millions of miles of testing to realise its full humanitarian potential. We owe it to future generations to acknowledge the shortcomings of the current state of affairs before blazing ahead with what may well be one of the most important technological developments of the modern age.
There remains a pressing need to adjust the legal and social context around autonomous driving. Implicit in science-fiction writer Isaac Asimov’s “Three Laws of Robotics,” a set of principles conceived in 1942 to govern man’s relationship to sentient machines, is a call to specify responsibility for any errant decision a given technology makes. Before launching any more testing, the private and public sectors must agree on who will be responsible for subsequent casualties, thereby defining the full scope of risks associated with entering the autonomy space.
The media tends to emphasise technology’s disregard for human rights, and though such views are often valid in the short run, they are myopic. While there is considerable uncertainty as to what the future of technology will look like, we can confidently say that its ultimate goal is to enable humans to live healthier, more productive, more enjoyable lives.
Autonomous technology is no different, and lawmakers would do well to remember that.
Frank Kosarek is a Public Policy and Economics student at Duke University in Durham, NC and an intern at DiploFoundation.