Autonomous Vehicles Are Ready to Save Us, but Are We Ready to Let Them?

Autonomous Vehicles and Their Ethical and Legal Implications

Advances in robotics and automotive technology have tremendous capacity to change the world we live in, and self-driving cars, the most visible application of that technology, sit at the center of legal and ethical questions.  Self-driving cars, which will also be referred to as automated vehicles and autonomous vehicles, will have an impact that goes far beyond the way we commute.  Entire industries could vanish while completely new ones emerge.

Vanity Fair correspondent Nick Bilton summarized his opinion of self-driving cars during an interview on the RadioLab podcast episode “Driverless Dilemma,” saying, “It’s going to have a larger effect on society than any technology that has ever been created in the history of mankind...in the next ten years there will be 20-50 million jobs that will vanish...there are entire industries that are built around just cars.”  The ripple effect goes much further than those who make a living by driving or manufacturing cars.  He went on to explain just how wide the scope of this technology is: “...for example if you are not driving the car why do you need insurance?  There are no parking tickets because your driverless car knows where it can and cannot park…driverless truckers never need to stop at rest stops...The whole concept of what a car is will change...if I wanted to workout I call a driverless gym car...office car, I pick someone up in a car and we have a meeting in an office car…”  With the technology for self-driving cars already upon us and its widespread application approaching, there are several questions society must ask itself through legal and ethical lenses.

When examining the legal and ethical issues posed by self-driving cars, it is important to first define smart technology: technology that mines relevant data in order to identify and react to the patterns it finds.  In essence, the computer analyzes input, draws conclusions based on its programming, and delivers an output based on its analysis of the collected data.  A self-driving car combines the rules its programmers give it with the data it collects while operating, and from these it derives a solution to the problem in front of it.
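To make that sense-analyze-act loop concrete, here is a minimal, hypothetical sketch in Python.  Every name, sensor reading, and threshold in it is an illustrative assumption rather than any manufacturer's actual code; it shows only the general shape of combining programmed rules with collected data.

```python
# A minimal, hypothetical sense-analyze-act loop for an autonomous vehicle.
# All names and thresholds are illustrative assumptions, not real vendor code.

from dataclasses import dataclass

@dataclass
class SensorReading:
    obstacle_distance_m: float  # distance to nearest obstacle ahead, in meters
    speed_mps: float            # current vehicle speed, in meters per second

def decide(reading: SensorReading) -> str:
    """Combine a programmed rule with live sensor data to pick an action."""
    # Programmed rule: keep at least two seconds of stopping headway.
    safe_gap_m = reading.speed_mps * 2.0
    if reading.obstacle_distance_m < safe_gap_m:
        return "brake"
    return "maintain_speed"

if __name__ == "__main__":
    # Example: traveling 20 m/s with an obstacle 30 m ahead needs a 40 m gap.
    print(decide(SensorReading(obstacle_distance_m=30.0, speed_mps=20.0)))  # brake
```

In a real vehicle the same loop runs continuously over far richer sensor data, but the structure, a programmed rule applied to live input to yield an action, is the point.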

This gray area of autonomy asks us to dive into the question not only of what artificial intelligence is, but of who artificial intelligence is.  While this may sound like science fiction, it is paramount in deciding the legal ramifications of self-driving cars.  By most philosophies and metrics, robots cannot be held responsible or punished by law, since they are not aware of their freedoms and do not possess consciousness.

In an attempt to answer the question of who artificial intelligence is, the National Highway Traffic Safety Administration, working in conjunction with SAE International (the Society of Automotive Engineers), has defined six levels of vehicle automation.

The levels are as follows:

Level 0: The human driver does everything.

Level 1: Automation occasionally assists with some parts of the driving task, such as a backup camera.

Level 2: Automation can actually conduct some driving tasks while under human monitoring, such as park assist.

Level 3: The automated system can both conduct the driving task and monitor the environment in some situations, but the human driver must be ready to take over, as with automated cruise control in highway driving.  No Level 3 systems are currently available to the consumer public.

Level 4: The automated system can conduct the driving task and monitor the driving environment, but only under certain conditions, such as driving autonomously as long as it is not in harsh winter weather (which has been a stubborn roadblock in development).

Level 5: The automated system performs all driving tasks under all conditions.  This is an entirely autonomous vehicle requiring no human assistance, or even human presence, regardless of the surrounding environment or situation.
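For readers who think in code, the taxonomy maps naturally onto a simple enumeration.  The sketch below is purely illustrative and assumes nothing beyond the level definitions above; the class and function names are my own, not part of any standard.

```python
# Illustrative encoding of the SAE/NHTSA automation levels described above.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0      # human driver does everything
    DRIVER_ASSISTANCE = 1  # occasional assistance (e.g., backup camera)
    PARTIAL = 2            # automation drives some tasks under human monitoring
    CONDITIONAL = 3        # system drives and monitors, human must stand by
    HIGH = 4               # fully autonomous, but only in certain conditions
    FULL = 5               # fully autonomous in all conditions

def human_must_be_ready(level: AutomationLevel) -> bool:
    """At Level 3 and below, a human must be prepared to take control."""
    return level <= AutomationLevel.CONDITIONAL
```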

In the United States, while there are a few federal regulations governing the day-to-day operation of self-driving cars, they mostly adhere to the same rules that human-driven vehicles must obey.  The detailed legality and interpretation, however, are being worked out at the state level.  Most of the state-level discussion revolves around who is allowed to use self-driving cars, licensing, and the legality of their use.  For example, in California you need a special driver's license to use a self-driving car, while in Florida you do not.

Who is liable when something goes wrong, however, is still a bit of a legal frontier.  Some parties have been exempted, such as original manufacturers whose vehicles have autonomous technology installed by a third party, but when vehicles manufactured as autonomous collide, the debate is wide open.  In Florida, California, and Nevada, the person who engages the technology is considered the operator of the vehicle.  Legislators and society alike are uncertain how to handle situations in which an autonomous vehicle must choose the lesser of two evils: for example, choosing between hurting a family of four inside the car or running over ten criminals on the sidewalk.

In 2018, a woman in Arizona was fatally struck by an autonomous car traveling five miles per hour under the speed limit.  Despite there being a person sitting inside the car, questions of liability centered on the manufacturer.  While legislation and rules exist for how self-driving cars are tested, how they are to be operated, and the standards at which they can be released, there is very little legal doctrine regarding who is at fault when an automated vehicle makes a mistake.

There may be some parallels in the surgical realm, where AI-assisted procedures have generated a number of liability cases, but even those do not approach the scale of impact driverless cars will have.  Currently, the framework being laid out leans toward manufacturers being ultimately responsible, especially considering that these vehicles may eventually have no one inside them at all.  For now, the available legislation focuses mostly on how self-driving cars are tested, who can own and operate them, and manufacturing protocols.  This makes the current legal situation considerably simpler than it will be once a meaningful share of the population is using autonomous vehicles.

Beyond the legal aspects of liability and public safety, there are several even weightier ethical considerations in determining the use and application of autonomous vehicles.

There is an ethical debate to be had over whether to hold back or fast-track autonomous vehicles on the road.  Google reports that its driverless cars have traveled over a million miles, roughly the equivalent of 75 years of driving for an average American, without any accidents.  The nonprofit Eno Center for Transportation recently published a study with astonishing results: its research indicated that over 21,000 lives would be saved and crashes would decrease by over 4 million if 90% of the cars on the road were automated.  Viewed through Kantian ethics, free-market ethics, and utilitarianism, the answer to self-driving cars can be conflicting.
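As a quick sanity check on the 75-year comparison, assume the commonly cited figure of roughly 13,500 miles driven per year by the average American (an assumption on my part, not a number from the sources above):

```python
# Sanity check on the "75 years of driving" comparison.
# Assumption (not from the cited sources): the average American drives
# about 13,500 miles per year, a commonly cited estimate.
miles_driven = 1_000_000
miles_per_year = 13_500
print(miles_driven / miles_per_year)  # ~74 years, consistent with the claim
```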

The utilitarian view, assuming the research is accurate, would tell us to make every single car autonomous as soon as we possibly can, perhaps even immediately.  With the predicted lives saved, accidents avoided, and environmental benefits being so astronomically large, they are worth the bumps, bruises, and even deaths that would occur while working out the kinks.  The greater good is overwhelming: while it is important to consider the livelihoods of people who make a living by driving, the good still outweighs the bad, and we should launch self-driving cars as soon as possible.  With roughly 95% of car deaths resulting from human error, the evidence strongly suggests that automated cars will be safer, if they are not already.

A free-market view of self-driving cars is a little less clear.  Free-market ethics would tell us the decision belongs to car manufacturers and all those who stand to benefit financially from autonomous vehicles.  This approach is a little less gung-ho than the utilitarian one: free-market ethics would suggest self-driving cars be produced precisely at the speed at which they can be lucrative, and part of being lucrative is their safety, efficiency, and cost.  Currently, building a Level 4 or 5 autonomous car costs about $320,000, but experts predict that as early as 2025 a self-driving car will cost only $7,000 to $10,000 more than the regular retail price of the car.  That is still a substantial amount of money, and through a free-market lens it is the economics that would dictate the pace at which autonomous cars are pursued.

Kantian ethics leads to several conflicting decisions.  Raj Rajkumar, a professor at Carnegie Mellon, describes what he sees as an inevitable progression of the car's sensors: from the lights, lasers, and cameras they have now to systems able to determine not only what is around them, but who.  When cars can read who is around them and talk to each other, what will they decide?  If they must choose, does the car kill the billionaire philanthropist or the bus full of infants with spina bifida?  A boy or a girl?  It becomes clear that deontological rules must be bent when we consider autonomous vehicles and what they are capable of.
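To see why deontologists recoil, consider how crude any programmed rule would look in practice.  The following sketch is entirely hypothetical: no manufacturer is known to work this way, and the arbitrary "harm scores" it relies on are precisely the moral judgment that Kantian ethics says cannot be reduced to numbers.

```python
# A deliberately uncomfortable, purely hypothetical sketch of a
# "lesser of two evils" chooser. No real system is known to work this way;
# the harm scores are arbitrary, which is precisely the ethical problem.

def choose_outcome(options: dict[str, float]) -> str:
    """Pick the option with the lowest assigned harm score."""
    return min(options, key=options.get)

# Whoever sets these numbers is encoding a moral judgment in code.
dilemma = {
    "swerve_into_sidewalk": 10.0,  # harms pedestrians
    "stay_in_lane": 4.0,           # harms the car's occupants
}
print(choose_outcome(dilemma))  # stay_in_lane
```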

Calling for a standardized set of rules for these cars to follow, Bill Ford Jr. asks, “Can you imagine if (Ford) has one algorithm, and Toyota had another and General Motors has another?”  Whatever regulations and liability rules we decide on, they need to be agreed upon almost universally, so that German cars, Japanese cars, and so forth do not make different decisions based on different codes of ethics.  Dr. Rajkumar has even predicted a change to the Geneva Convention.

We may be a few decades away from a world where we no longer learn to drive and autonomous vehicles dominate the road, but their impact is already being felt.  From the legislation being passed to how manufacturers research and develop, it is clear that the question is when, not if.  Liability will eventually rest with manufacturers, but until then it will vary with the circumstances.  Ethically, the questions are more complex still, because they involve not only how quickly we should deploy self-driving cars but, more importantly, how we should program them to make decisions.

Works Cited

1. WNYC Studios (2017). Driverless Dilemma. [podcast] RadioLab. Available at: https://www.wnycstudios.org/story/driverless-dilemma [Accessed 19 May 2019].

2. Sabine Gless, Emily Silverman & Thomas Weigend, If Robots Cause Harm, Who Is to Blame: Self-Driving Cars and Criminal Liability, 19 New Crim. L. Rev. 412, 436 (2016)

3. Agnes B. Juhasz, The Regulatory Framework and Models of Self-Driving Cars, 52 Zbornik Radova 1371, 1392 (2018)

4. Angela Foster, Is the Legal Community Ready for Self-Driving Cars, 41 Litig. News 26, 27 (2016)

5. Jeffrey R. Zohn, When Robots Attack: How Should the Law Handle Self-Driving Cars That Cause Damages, 2015 U. Ill. J.L. Tech. & Pol'y 461, 486 (2015)

6. Madeline Roe, Who's Driving That Car: An Analysis of Regulatory and Potential Liability Frameworks for Driverless Cars, 60 B.C. L. Rev. 317, 348 (2019)

7. Kenneth S. Abraham & Robert L. Rabin, Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era, 105 Va. L. Rev. 127, 172 (2019)
