The Ethical and Inevitable Challenges of Autonomous Vehicles

There were 6,296,000 reported automobile crashes in the U.S. in 2015 (the most recent year for which data was available from the National Highway Traffic Safety Administration), resulting in 2,443,000 injuries and 35,485 fatalities. The advent of autonomous vehicles makes drastically reducing those injuries and fatalities an achievable goal in the coming years; at least one study has suggested that self-driving cars could eliminate as many as 90% of traffic fatalities by mid-century. At 2015 levels, that would mean roughly 32,000 fewer deaths each year.

For this technological advance to happen, regulators and society as a whole will have to adjust their approach to vehicle safety. Autonomous vehicle technology is not yet at the point where completely driverless cars belong on the open road, but the crucial question that will shape the future of the industry is “how safe is safe enough?”

If a self-driving car is safer than the average human driver, that should be “safe enough,” at least for now. For the industry to survive, the starting standard cannot be higher than that, and from a rational utilitarian perspective, it should not be. If standards are based on questions like “could this accident have been avoided?”, the industry will be buried in litigation and never get off the ground. For self-driving cars to gain a meaningful share of the vehicle market, the playing field has to be level; computers cannot be held to a higher standard simply because society thinks they should be.

Similarly, the logic of autonomous vehicles cannot be compared to the thought processes or instincts of human drivers. Whether a human driver would have made the same “mistake” as a computer-driven vehicle should not matter in autonomous vehicle regulation when the end goal is a reduction in total vehicular injuries and fatalities. Computer logic and human reasoning are not comparable, and there will inevitably be accidents caused by a computer's failure to recognize a condition or event that a human driver would have processed and reacted to. The inability to understand why a self-driving vehicle made a decision that a human driver would instinctively recognize as wrong will be a major psychological driver of early apprehension toward autonomous vehicles, but if manufacturers and regulators keep the goal of reducing total vehicular fatalities in focus, this fear can be overcome.

Autonomous vehicle technology has the potential to be significantly safer than human drivers, and a level playing field is necessary only to keep the new technology from being suppressed by excessive litigation. Once autonomous vehicles are established in the market, manufacturers can and should be held to a gradually rising standard.

Manufacturers and regulators will face difficult challenges as autonomous vehicle technology develops, and serious ethical questions, such as what a self-driving car should be programmed to do in a no-win situation where a fatality is unavoidable, need to be addressed. Programming and regulating autonomous vehicles in a way that adequately protects humans is difficult but achievable; for the task to be feasible, regulators cannot hold computer-driven vehicles to a higher standard than human drivers. The “mistakes” made by autonomous vehicles will undoubtedly frighten many at first, but such vehicles will drastically reduce vehicular fatalities as the technology reaches its full potential.
