Another day passes, and another batch of articles has come out regarding the recent Tesla Autopilot accident. Last week’s blog by Steve Kuciemba highlighted how this crash might have been avoided if connected vehicle technology had been involved. I have a few additional thoughts:
- Full autonomy (assuming a well-tested vehicle) could also have avoided this accident. As Chris Urmson of Google states in this article, “people were doing ridiculous things in the car” (referring to people’s response to partially automated vehicles). People assume a vehicle with some level of automation can handle far more than it is actually equipped to do. This reflects two aspects of human nature: 1) a lack of awareness or understanding of how partial automation works, and 2) laziness (i.e., not staying focused on driving). Note: all of these news articles may be helping to address the awareness issue!
- Exactly how much testing is necessary before a vehicle is allowed on the road? Automobile manufacturers would probably argue that years of testing are required before deploying new features to the public. This New York Times article states the following: “a Tesla executive said the Autopilot system had performed safely during tens of millions of miles of driving by consumers. ‘It’s not like we are starting to test this using our customers as guinea pigs,’ he said.” What standards are required here, and how can the government determine an appropriate threshold?
- As I’ve stated in an earlier blog post, I’ve always worried that a single accident could slow (or even halt) the development and/or adoption of driverless vehicles. This article is one of many that raise this concern. I think that would be a huge loss for society, because the accidents caused by these “robo-cars” will likely be a fraction of those currently caused by human drivers.
It looks like Tesla has no plans to disable the Autopilot feature (source). I wonder if that’s a good or a bad thing… Thoughts?