Comment by micromacrofoot
20 hours ago
> Waymo is held to a significantly higher standard than human drivers.
They have to be, as a machine cannot be held accountable for a decision.
Slowing the adoption of much-safer-than-human robotaxis, for whatever reason, has a price measured in lives. If you think the principle you've just stated is worth all those additional dead people, okay; but you should at least be aware of the price.
Failure to acknowledge the existence of tradeoffs tends to lead to people making really lousy trades, in the same way that running around with your eyes closed tends to result in running into walls and tripping over unseen furniture.
But we have no way of knowing whether robotaxis are safer. See, for example, the arguments raised here: https://www.bloomberg.com/news/features/2026-01-06/are-auton...
We can't blindly trust Waymo's press releases or apples-to-oranges comparisons. That's why the bar is higher.
You may not have any way of knowing, but the rest of society has developed all sorts of systems for knowing: the scientific method, Bayesian reasoning, etc. Or start with the Greek philosophy classics.
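For what it's worth, here is a minimal sketch of what that kind of Bayesian reasoning could look like for comparing crash rates. Every crash count and mileage figure below is a made-up placeholder for illustration, not real Waymo or human-driver data:

```python
# Toy Bayesian comparison of two crash rates (per million miles).
# All numbers are hypothetical, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: (crashes, millions of miles driven)
human_crashes, human_miles = 380, 100.0
robo_crashes, robo_miles = 12, 10.0

# Model crashes as Poisson with an unknown rate per million miles.
# A Gamma(1, 1) prior on the rate gives a Gamma posterior:
#   rate | data ~ Gamma(1 + crashes, 1 + miles)  (shape, rate parameterization)
# numpy's gamma() takes a *scale* parameter, so pass 1 / (1 + miles).
human_post = rng.gamma(1 + human_crashes, 1.0 / (1.0 + human_miles), 100_000)
robo_post = rng.gamma(1 + robo_crashes, 1.0 / (1.0 + robo_miles), 100_000)

# Posterior probability that the robotaxi crash rate is lower.
print(f"P(robotaxi rate < human rate) = {(robo_post < human_post).mean():.3f}")
```

The arithmetic is the easy part, of course; the hard part, as the sibling comments point out, is getting trustworthy counts and comparable exposure data in the first place.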
Waymo is not a machine, it is a corporation, and corporations can, in fact, be held accountable for decisions (and, perhaps more to the point, for defects in the goods they manufacture, sell, distribute, and/or use to provide services).
The promise of self-driving cars being safer than human drivers is also kind of the whole selling point of the technology.
What? No. The main selling point is eliminating the cost of a human driver, whether by letting people safely do other things in the car (answering emails, doomscrolling) or by running robotaxis with no driver at all.
Sure, but the companies building them are just stuffing billions of dollars into their ears so they don't have to hear the question: who's responsible when it kills someone?
The question of responsibility, while philosophically interesting w.r.t. increasingly autonomous machines, is not going to be an issue in practice. We'll end up dealing with it like we always do with multi-party responsibility in complex systems: regulators setting safety standards and outlining the types and structure of liability, contracts shifting the liability around, and lots and lots of insurance.
In fact, if you substitute "company providing a self-driving solution (integrated software + hardware)" for "company renting out commercial drivers" (or machine operators), then self-driving cars already fit well into the existing legal framework. The way I see it, the only change self-driving cars introduce here is that there is no individual operator to blame for an accident: no specific human we can heavily fine or jail and then feel good about ourselves for having issued retributive justice that makes everything whole again. Everything else has long since been worked out.
> They have to be, as a machine can not be held accountable for a decision
This logic applies equally to all cars, which are machines. Waymo just has its decision-makers one more step removed than a human driver does. Either way, it's not a good axiom on which to base a theory of liability.