For the past few weeks, Google has been making news around the country over the accidents its cars have been in during this testing period. At first, everyone was reporting on the eleven accidents that Google’s self-driving cars had been involved in. For example, this article states: “Chris Urmson, head of Google’s self-driving car program, said in a blog post that the accidents caused no injuries and only ‘light damage.’ Google’s cars were not the cause of any of the incidents.” A few weeks later, a similar article came out about a near-collision between two self-driving cars. This is, supposedly, “raising concerns over the technology,” and “there’s concern about handing control of heavy, fast-moving objects like cars to computers.”
I’m happy to see that Google (and other developers of driverless-car technology) will continue to have to report accidents; however, there is a risk in sharing that information with the general public. It’s important that public safety be maintained and that the industry learn from the hiccups along the way. The industry needs to fail in order to learn, and our society needs to acknowledge that. My big worry is that a few accidents actually caused by the driverless car (which, supposedly, hasn’t happened yet) will significantly impede our society’s acceptance of the new technology. I know the media enjoys a good story; however, I hope the reporting around future driverless-car accidents finds a balance between public information sharing and respect for the technology development and testing process.
I believe a Washington Post article said it best: “Last year, an estimated 32,675 people died in motor vehicle traffic crashes in the U.S., according to the National Highway Traffic Safety Administration. No robots were behind the wheel.”
I’m struggling to think of any mode of transportation we use today whose safety record wasn’t much worse in its formative years than once it became mainstream. But that isn’t to say that we should lower our expectations or standards. In this case it’s helpful to put the numbers into perspective. The 2013 NHTSA property-damage-only rate of 0.3 crashes per million miles traveled is a national average, pulled from Table 3 of their Traffic Safety Facts 2013.
That report doesn’t break down the rates between urban and rural areas. NHTSA does publish a separate report on that split, but only for fatal crashes. Fatal rates are generally higher in rural areas, but not always. Numbers on property-damage-only crashes are much harder to chase down. I cannot quickly find solid figures, but I remember from earlier work that urban damage-only rates were roughly three times higher than rural ones. So if NHTSA published urban versus rural property-damage rates, the former would probably be around 0.6 to 0.8 crashes per million vehicle miles traveled. Thus, the rate of 0.65 for the 11 AV crashes reported in the WSJ blog you’ve linked to falls within the typical range, given that Google’s test drives have been in the Bay Area.
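To make the comparison concrete, here is a minimal sketch of the arithmetic above. The urban multiplier of roughly two to three times the national rate is my recollection from earlier work, not a published NHTSA figure, so treat the resulting range as an assumption:

```python
# Sanity check of the crash-rate comparison: does the reported AV rate
# fall inside a plausible urban property-damage-only (PDO) range?

NATIONAL_PDO_RATE = 0.3  # crashes per million VMT (Traffic Safety Facts 2013, Table 3)

# Assumed urban multiplier (recalled from earlier work, not a published table).
URBAN_MULT_LOW, URBAN_MULT_HIGH = 2.0, 3.0

urban_low = NATIONAL_PDO_RATE * URBAN_MULT_LOW    # lower bound of urban PDO range
urban_high = NATIONAL_PDO_RATE * URBAN_MULT_HIGH  # upper bound of urban PDO range

av_rate = 0.65  # per million VMT, the figure reported for the 11 AV crashes

print(f"Estimated urban PDO range: {urban_low:.2f} to {urban_high:.2f} per million VMT")
print(f"AV rate {av_rate} falls within that range: {urban_low <= av_rate <= urban_high}")
```

With these assumptions the urban range works out to roughly 0.6 to 0.9, consistent with the 0.6-to-0.8 hedge above, and the 0.65 AV rate lands inside it.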
Or does it? There are at least three reasons why the asserted AV crash rate is misleading. First, it includes any crash involving an AV, irrespective of whether the driver or the machine had control of the vehicle at the time; arguably, crashes that occurred under human control should not be counted if you’re comparing the two. Second, because they are experimental vehicles, Google and Audi have to report all crashes, irrespective of the level of damage or lack of it (in this case Google disclosed the numbers for PR reasons). Traffic cops I know say that many crashes with minor or inconsequential damage are never reported, especially single-vehicle accidents; if those were included, the NHTSA numbers would increase, maybe not by much, but not by zero either. Finally, we’re dealing with small numbers here, covering the first six years of a rapidly advancing (and improving) technology. The more informative numbers might be from the period when Google cars were isolated from general traffic (zero crashes), or from when the technology goes mainstream.
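As a rough illustration of why the first two caveats matter, the sketch below recomputes the rate under hypothetical adjustments. The split between human-controlled and autonomous-mode crashes and the underreporting factor are placeholders I made up for illustration, not reported figures; only the 11 crashes, the 0.65 rate, and the 0.3 national rate come from the sources above:

```python
# Illustrative adjustments to the headline AV crash rate.
# Hypothetical inputs are marked as such in the comments.

def rate_per_million_vmt(crashes: int, miles: float) -> float:
    """Crashes per million vehicle miles traveled."""
    return crashes / (miles / 1_000_000)

total_av_crashes = 11              # from the WSJ blog post cited above
autonomous_mode_crashes = 7        # HYPOTHETICAL: crashes while the computer drove
test_miles = 11 / 0.65 * 1e6       # miles implied by the reported 0.65 rate

headline = rate_per_million_vmt(total_av_crashes, test_miles)
autonomous_only = rate_per_million_vmt(autonomous_mode_crashes, test_miles)

# Underreporting: if some fraction of minor human-driver crashes never
# reach NHTSA, the comparable benchmark should be scaled up accordingly.
UNDERREPORTING_FACTOR = 1.5        # HYPOTHETICAL
adjusted_benchmark = 0.3 * UNDERREPORTING_FACTOR

print(f"headline AV rate:            {headline:.2f} per million VMT")
print(f"autonomous-mode-only rate:   {autonomous_only:.2f} per million VMT")
print(f"benchmark with underreport:  {adjusted_benchmark:.2f} per million VMT")
```

The point is directional, not numerical: excluding human-controlled crashes pushes the AV rate down, while correcting for unreported conventional crashes pushes the benchmark up, so both adjustments move the comparison in the AVs’ favor.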