I think I’ve avoided this topic long enough. The industry is often faced with questions like these: How will the vehicle decide who to kill: the driver or other people? Will the vehicle have the knowledge to decide which people’s lives are more valuable? Should the vehicle aim for the cyclist with a helmet (who is protected) instead of the cyclist without one? To date, Germany is the only country whose government has taken a stance on this topic. As stated in this article, the country’s transportation minister outlined three rules as a starting point for future laws:
- Property damage always takes precedence over personal injury: when forced to choose, the vehicle damages property rather than harming people.
- There must be no classification of people, for example by size, age, or the like.
- If anything goes wrong, the manufacturer is liable.
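To make the first two rules concrete, here is a minimal sketch (in Python; the option names, fields, and cost values are entirely hypothetical, not drawn from any real planner) of how a vehicle might rank outcomes so that any personal injury outweighs any property damage, and no attribute of the people involved is ever consulted:

```python
# Hypothetical sketch of rules 1 and 2: rank collision-avoidance options.
# All names and values are illustrative, not from any real system.

def outcome_cost(option):
    """Lower cost is better. Any personal injury dominates any property
    damage (rule 1), and no attribute of a person (size, age, etc.) is
    consulted (rule 2) -- only the count of people harmed."""
    INJURY_WEIGHT = 1_000_000  # chosen so one injury outweighs any property damage
    return option["people_harmed"] * INJURY_WEIGHT + option["property_damage"]

options = [
    {"name": "swerve into parked car", "people_harmed": 0, "property_damage": 20_000},
    {"name": "brake in lane",          "people_harmed": 1, "property_damage": 0},
]

best = min(options, key=outcome_cost)
print(best["name"])  # -> "swerve into parked car"
```

Note that a scheme like this never has to weigh one person’s life against another’s; it only has to prefer damaged property to injured people, which is far less controversial.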
Interestingly, the automakers and technology developers are pretty quiet on this issue. I’m not surprised. I believe the media and academics are making this into a much larger issue than it actually is (see some examples of the hype here, here, and here). When was the last time you were driving and faced a decision to kill yourself or someone else? And if you are among the very small group that has faced that situation, how did you make the decision? Was there even time to make one?
MIT has developed a Moral Machine to gather a “human perspective on moral decisions made by machine intelligence, such as self-driving cars.” The reality is that the developers working on driverless vehicles are refining the technology to deal with pedestrians, cyclists, bad weather, poor pavement markings, and construction sites. Those challenges are everyday occurrences, yet our society has put far more emphasis on the rare moral decisions. Is that necessary?
No, I don’t think it is necessary at all. The number of cases where a car is in a situation where bodily injury is unavoidable *and* it can choose between injuring one person or another will be vanishingly small.
However, I certainly agree with generally preferring property damage over bodily harm. I suppose I could construct an elaborate fantasy (destroying a space elevator and all schematics for it vs. breaking someone’s arm) where I would prefer the bodily harm, even if it were mine.
I know this is a late response, but I’m enjoying the blog and this one caught my eye… my dissertation is on this topic. I think it’s easy to brush off the extreme, thought-provoking scenarios as mere thought experiments (i.e., the trolley problem), but the ethics behind the decisions is something that must be dealt with. We aren’t dealing with artificial intelligence or computers with morals, so autonomous car decisions must come from programming. Questions inevitably come in the aftermath of a horrible accident, and society will expect a reasonable answer. To be sure, the family making funeral arrangements will want one.
The reality is that computers don’t have instincts the way humans do. Every move is based on an algorithm that tells the car what to do given a set of contextual facts. In the event of an accident where harm will come to someone, how that decision process is determined has to be treated as important.
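As a toy illustration of that point (Python again; the field names and thresholds here are invented for the example), a programmed rule mapping contextual facts to an action might look like this:

```python
# Illustrative only: a hand-written rule mapping contextual facts to an action.
# Field names and thresholds are invented for the example.

def choose_action(context):
    """Deterministic rule: given the same facts, the same action results.
    There is no 'instinct', only the branches the programmer wrote."""
    if context["obstacle_ahead"] and context["distance_m"] < context["braking_distance_m"]:
        if context["adjacent_lane_clear"]:
            return "swerve"
        return "emergency_brake"  # harm may be unavoidable; this branch was chosen in advance
    return "continue"

print(choose_action({
    "obstacle_ahead": True,
    "distance_m": 12.0,
    "braking_distance_m": 18.0,
    "adjacent_lane_clear": False,
}))  # -> "emergency_brake"
```

The point is that whatever the car does in the worst case was decided, explicitly or implicitly, when someone wrote those branches, and that is exactly why the question deserves a reasonable answer.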
Great blog!