Driverless Vehicle Ethics – Should We Care?

I think I’ve avoided this topic long enough. The types of questions the industry is often faced with are: How will the vehicle decide who to kill: the driver or other people? Will the vehicle have the knowledge to decide which people’s lives are more valuable? Should the vehicle aim for the cyclist with a helmet (who is protected) instead of the cyclist without one? To date, Germany is the only country whose government has taken a stance on this topic. As stated in this article, the country’s transportation minister outlined three rules as a starting point for future laws:

  1. Property damage always takes precedence over personal injury.
  2. There must be no classification of people, for example, based on their size, age and the like.
  3. If anything goes wrong, the manufacturer is liable.
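To make the first two rules concrete, here is a minimal, hypothetical sketch of how they might be encoded as a maneuver-ranking policy. Nothing here comes from any real autonomous-vehicle stack; the `Outcome` fields and option names are illustrative assumptions. Note that rule 3 (manufacturer liability) is a legal matter and isn't something the software itself can encode.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    people_harmed: int      # rule 2: a bare count only -- never age, size, etc.
    property_damage: float  # estimated repair cost, arbitrary units

def choose_maneuver(options: dict[str, Outcome]) -> str:
    """Pick the maneuver that minimizes personal injury first (rule 1),
    breaking ties by property damage. Lexicographic ordering via a tuple key."""
    return min(options, key=lambda name: (options[name].people_harmed,
                                          options[name].property_damage))

# Hypothetical dilemma: accept expensive property damage to avoid injuring anyone.
options = {
    "swerve_into_barrier": Outcome(people_harmed=0, property_damage=20_000.0),
    "brake_straight":      Outcome(people_harmed=1, property_damage=500.0),
}
print(choose_maneuver(options))  # swerve_into_barrier
```

The point of the sketch is that rule 2 forces the injury term to be a count with no per-person weighting, which sidesteps most of the "whose life is worth more" framing entirely.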

Interestingly, the automakers and technology developers are pretty quiet on this issue. I’m not surprised. I believe the media and the academics are making this into a much larger issue than it actually is (see some examples of the hype here, here, and here). When was the last time, while driving, that you were faced with a decision to kill yourself versus someone else? And, if you are in the very small group that has faced that situation, how did you make the decision? Was there even time to make a decision?

MIT has developed a Moral Machine to gather a “human perspective on moral decisions made by machine intelligence, such as self-driving cars.” The reality is that the developers working on driverless vehicles are refining the technology to deal with pedestrians, cyclists, bad weather, poor pavement markings, and construction sites. Despite these challenges being everyday occurrences, our society has put a lot of emphasis on the moral decisions. Is that necessary?

About Lauren Isaac

Lauren Isaac is the Director of Business Initiatives for the North American operation of EasyMile. EasyMile provides electric, driverless shuttles that are designed to cover short distances in multi-use environments. Prior to working at EasyMile, Lauren worked at WSP, where she was involved in various projects involving advanced technologies that can improve mobility in cities. Lauren wrote a guide titled “Driving Towards Driverless: A Guide for Government Agencies” regarding how local and regional governments should respond to autonomous vehicles in the short, medium, and long term. In addition, Lauren maintains the blog “Driving Towards Driverless” and has presented on this topic at more than 75 industry conferences. She recently did a TEDx Talk, and has been published in Forbes and the Chicago Tribune, among other publications.

2 Responses to Driverless Vehicle Ethics – Should We Care?

  1. Shelvacu says:

    No, I don’t think it is necessary at all. The number of cases where a car is in a situation where bodily injury is unavoidable *and* it can choose between bodily injury of one person or another will be vanishingly small.

    However, I certainly agree with generally preferring property damage over human damage. I suppose I could construct an elaborate fantasy (destroying a space elevator and all schematics-type stuff for it vs breaking someone’s arm) where I would prefer the bodily harm, even if it was me.


  2. Byron says:

    I know this is a late response, but I’m enjoying the blog and this one caught my eye… my dissertation is on this topic. I think it’s easy to brush off the extreme, thought-provoking scenarios as just thought experiments (i.e., the trolley problem), but the ethics behind the decisions is something that must be dealt with. We aren’t dealing with artificial intelligence, or computers with morals, so autonomous car decisions must come from programming. Questions inevitably come in the aftermath of a horrible accident, and society will expect a reasonable answer. To be sure, the family making funeral arrangements will want one.
    The reality is that computers don’t have instinct like humans do. Every move is made based on an algorithm that tells the car what to do given a set of contextual facts. In the event of an accident where harm will come to someone, the question of how that decision process works has to be treated as important.
    Great blog!

