Nice write-up by Nicholas Deleon in Why Google’s Self-Driving Car Crash Doesn’t Change Anything.
As I told him, I think it’s wrong to expect robot cars to be 100% safe, so a Google self-driving car getting into a fender-bender is of no real significance. There are a lot of issues with self-driving cars, but their failure to be perfect is not, in my opinion, one of them. Indeed, until all cars on the road are controlled by compatible (note I said compatible, not centrally controlled!) systems, the interaction between, excuse the term, legacy cars and robotic cars — not to mention pedestrians, stray animals, and debris on the road — means accidents will happen.
As I told Deleon, one issue is whether the robot car is (provably) safer than the average human driver. Another is who should pay when the robot car is wholly or partly at fault for an accident. The law has not determined how to allocate responsibility among the passenger, the owner, the programmer, and the manufacturer. We could treat this as a straightforward problem of product liability law, or we could be more creative. I’m thinking on it.