Comment by alkonaut
17 hours ago
> ~17 deaths per 100K drivers per year. 40K+ fatalities overall.
I also think one needs to remember that those are _abysmal_ numbers, so while the current discourse is US-centric (because that's where the companies and their testing are), I don't think it can be representative of the risks of driving in general. Naturally, robotaxis will benefit from better infrastructure outside the US (e.g. better separation of pedestrians), but they'll also have to clear a higher safety bar there, e.g. because there are fewer drunk drivers to compare against.
It's also sobering to calculate how this compounds over, say, 40 years: you get to about 1 in 150 drivers being involved in some kind of fatal accident. People are really bad at numbers and at assessing risk.
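The compounding is easy to check. A minimal sketch, assuming the ~17-per-100K annual figure quoted above and (as a simplification) independent risk each year:

```python
# Compound an annual fatal-crash risk over a 40-year driving lifetime.
# Assumption: ~17 deaths per 100K drivers per year, independent across years.
annual_risk = 17 / 100_000
years = 40

# Probability of at least one fatal accident over the whole period.
lifetime_risk = 1 - (1 - annual_risk) ** years

print(f"Lifetime risk over {years} years: {lifetime_risk:.3%}")  # ~0.68%
print(f"Roughly 1 in {round(1 / lifetime_risk)} drivers")
```

That lands right around the "1 in 150" back-of-the-envelope number.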
It will also never get worse. This is the worst the algorithms will ever be from this point forward.
I am not sure. Self-driving is complex and involves the behavior of other, non-automated actors. This is not like a compression algorithm where things are easily testable and verifiable. If Waymos start behaving extra-oddly in school zones, it may lead to other accidents where drivers attempt to go around the "broken" Waymo and crash into it, into pedestrians, or into other vehicles.
I know Tesla FSD is its own thing, but crowdsourced results show that FSD updates often increase the number of disengagements (errors):
https://electrek.co/2025/03/23/tesla-full-self-driving-stagn...
And we haven't reached the point where people start walking straight into the paths of cars, either obliviously or defiantly. https://www.youtube.com/shorts/nVEDebSuEUs
Has this been true of other Google products? They never get worse?