By: Gadi Perl.
The fatal car accident in Tempe, Arizona on March 18, 2018, in which an autonomous Uber vehicle killed a pedestrian crossing the road, instantly became world news. Many were curious to learn who was to blame, and video footage of the incident provoked much debate regarding potential mechanical failure and/or human error on the part of the supervising driver.
Since the liability and technical issues regarding autonomous cars have already been widely discussed, I would like to focus on two issues that may have been somewhat overlooked in the debate. The first is the amount of documentation the autonomous vehicle provided regarding the accident. The second is the almost predictable failure of the human driver to pay attention and attempt to prevent the accident.
My first remarks concern the potential implications for the right to privacy caused by the amount of documentation generated by cameras mounted on the vehicle involved in the accident. Let’s consider the number of cameras installed on the Uber autonomous car:
Following the accident, two videos were released, one showing the accident from the driver’s perspective and the other recording the driver’s response. It is therefore clear that at least two cameras were installed on the vehicle: an exterior camera documenting everything in front of the car (hereinafter the “front camera”) and an interior camera aimed at the driver.
While there is no doubt regarding the technical need for a front camera, since this is required for navigational purposes, the need is less clear regarding the interior camera. With no apparent technical need for an interior camera, one might speculate that the reason for installing a camera aimed at the driver has less to do with engineers and more to do with lawyers.
In an executive order of April 2017 promoting the testing of autonomous cars in the state of Arizona, Governor Ducey required all companies to monitor their experimental autonomous vehicles. With similar regulations in the California Code of Regulations governing the Department of Motor Vehicles, state lawmakers have made sure that companies provide comprehensive documentation should any accident occur.
On the one hand, a requirement for comprehensive documentation is understandable considering the application of an experimental new technology on public roads with the potential to risk lives. On the other hand, even with the need for safety, little thought has been given to exactly what should and should not be documented.
It was not surprising to see telemetry data regarding location, speed, and the direction of the car’s movement emerge. Such data are required for the operation and testing of the car and must be provided if we are to investigate what happened. Nevertheless, I found it troubling to see the footage of the driver published worldwide. It was not clear to me why the driver had been filmed in the first place.
It seems to me that lawmakers and companies alike focused excessively on risk minimization during the test runs, without giving enough thought to other considerations, such as privacy. Installing cameras to oversee passengers is a severe invasion of privacy, and in this case it was not fully warranted. If there was a need to examine human responses, a camera pointed at the driver’s face was very inefficient, giving us a partial picture at best. Even if one could claim that a driver-facing camera was needed, publication of the video was not required and should have been forbidden.
Will our future cars document us sitting there watching the road? Will they document our conversations with our loved ones? Will they save footage when we’re crying, or simply when we’re having a bad day? I personally felt bad for the driver, who became an unwilling celebrity.
It is important that when autonomous cars become available to the public, special attention be given to preserving our right to privacy, whether as passengers on board an autonomous vehicle or as pedestrians randomly filmed by passing autonomous cars. We should avoid a future in which every passing car may lead to an uploaded YouTube video, making us famous without our consent.
My second point concerns the risks of relying on an autonomous car while still expecting a person to assume responsibility in extreme cases.
It is clear from the video published on YouTube that during the moments leading up to the accident, the Uber employee did not have her eyes on the road, and she seemed relatively distracted. Only a second before the crash does the driver realize what is about to happen, when it is already too late to react.
I believe that the failure of the supervising driver to prevent the accident was almost inevitable. I was not surprised that the supervising driver, who most probably had nothing to do 99% of the time, was not vigilant in the moments leading up to the accident. The constant vigilance required of drivers to prevent accidents is difficult to maintain in human-controlled vehicles, and impossible in autonomous ones. This test driver was idle for most of her drive, with the autonomous car in charge of navigation. It is unrealistic to think that she could have stayed alert for such a long period while doing nothing.
This incident emphasized to me that when autonomous cars become commercially available, the question of the driver’s identity should be a dichotomous one. Either a human will be driving or a machine, not both. Any attempt to maintain human supervision will most likely prove ineffective, and drivers will not be able to prevent accidents. This is a legal statement, not just a practical one. Liability for preventing accidents cannot be attributed to the human supervisor. It simply won’t work.
Yang and Coughlin (2014) discuss self-driving vehicles and the challenges they pose for ageing drivers, who tend to respond more slowly, to suffer from hearing loss, and to be less accepting of new technologies. Therefore, apart from being inefficient, giving human drivers responsibility for supervising autonomous cars may also have discriminatory aspects. Elderly drivers will be less equipped to respond quickly and less adept at understanding the data from autonomous cars.
There are many more issues that researchers are trying to examine as self-driving vehicles become a reality. Liability for harm, cybersecurity, and moral decisions in life-and-death situations are just a few examples. Real-life accidents help us understand that what we write in articles, and the opinions we debate in various forums, have an impact on people’s lives. Finding solutions beforehand can make technology safer and promote equality.
Colonna, Kyle, Autonomous Cars and Tort Liability (Fall 2013), Case Western Reserve Journal of Law, Technology & the Internet, Vol. 4, No. 4, 2012. Available at SSRN: https://ssrn.com/abstract=2325879 or http://dx.doi.org/10.2139/ssrn.2325879.
NHTSA, Automated Vehicles for Safety, https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety.
Ethics Commission’s complete report on automated and connected driving, German Federal Ministry of Transport and Digital Infrastructure, August 28, 2017, available at: https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html?nn=187598.