Comment by alkonaut
19 hours ago
And before the argument "Self driving is acceptable so long as the accident/risk is lower than with human drivers" comes up, can I please get it out of the way: No it's not. Self driving needs to be orders of magnitude safer for us to accept it. If it's merely as safe or slightly safer than humans, we will never accept it. Because humans have "skin in the game". If you drive drunk, at least you're likely to be in the accident, or to have personal liability. We accept the risks with humans because those humans accept risk. Self driving abstracts away the legal risk and removes the physical risk.
I'm willing to accept robotaxis, and accidents in robotaxis, but there needs to be some solid figures showing they are way _way_ safer than human drivers.
I think those figures are already starting to accumulate. Incidents like this are rare enough to be newsworthy. Almost every minor incident involving Waymo, Tesla's FSD, and similar systems gets a lot of press. This was a major incident with a happy ending. Those are quite rare, and the lethal ones even rarer.
As for more data, there is a chicken-and-egg problem. The phased rollout of Waymo over several years has revealed many potential issues, but is also remarkable for its low number of fatal incidents. The benefit of a gradual approach is that it builds confidence over time.
Tesla has some way to go here. Though arguably, with many hundreds of thousands of paying users, if it were really unsafe there would be numbers showing it. Normal statistics in the US work out to ~17 deaths per 100K drivers per year, 40K+ fatalities overall. FSD, for all its faults and failings, isn't killing dozens of people per year. Nor is Waymo. It's a bit of an apples-and-oranges comparison, of course. But the bar for safety is pretty low once you include human drivers.
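Those two figures are at least mutually consistent. A quick sanity check, assuming roughly 233M licensed US drivers (my ballpark, not a figure from this thread):

    # Check that ~40K deaths/year and ~17 per 100K drivers agree,
    # assuming ~233M licensed US drivers (approximate, assumed).
    deaths_per_year = 40_000
    licensed_drivers = 233_000_000
    print(deaths_per_year / licensed_drivers * 100_000)  # ~17.2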
Liability weighs more heavily on companies than safety does. It's fine by them if people die, as long as they aren't liable. That's why the status quo is tolerated. Normalized for miles driven with and without autonomous driving, there's very little doubt that autonomous driving is already much safer. We can get more data, at the price of more deaths, by simply dragging out the testing phase.
Perfect is the enemy of good here. We can wait another few years (times ~40K deaths each), or we can let the technology start lowering the number of traffic deaths. Every year we wait means more deaths. Waiting here literally costs lives.
> ~17 deaths per 100K drivers per year. 40K+ fatalities overall.
I also think one needs to remember that those are _abysmal_ numbers, so while the current discourse is US-centric (because that's where the companies and their testing are), I don't think it can be representative of the risks of driving in general. Naturally, robotaxis will benefit from better infrastructure outside the US (e.g. better separation of pedestrians), but they'll also have to clear a higher safety bar, e.g. against a human baseline with fewer drunk drivers.
Also fun to calculate how this compounds over, say, 40 years: you get to about 1 in 150 drivers being involved in some kind of fatal accident. People are really bad at numbers and at assessing risk.
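A minimal sketch of that compounding, assuming the ~17/100K annual figure stays constant and each year's risk is independent (both simplifications):

    # Probability of dying on the road over a 40-year driving span,
    # compounding the assumed constant annual risk of 17 per 100K.
    p_year = 17 / 100_000
    p_40yr = 1 - (1 - p_year) ** 40
    print(f"{p_40yr:.2%}")    # ~0.68%
    print(round(1 / p_40yr))  # ~147, i.e. about 1 in 150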
It will also never get worse. This is the worst the algorithms will ever be from this point forward.
> I'm willing to accept robotaxis, and accidents in robotaxis, but there needs to be some solid figures showing they are way _way_ safer than human drivers.
Do you mean like this?
https://waymo.com/safety/impact/
Yes, but ideally from some objective source.
Like this? https://waymo.com/blog/2024/12/new-swiss-re-study-waymo
If Waymo is to be believed, they hit the kid at 6 mph, and they estimate that a human driver at full attention would have hit the kid at 14 mph. The Waymo was traveling at 17 mph. The situation of "kid running out between cars" will likely never be solved either, because even with sub-nanosecond reaction time, the car's mass and the tires' traction physically cap how fast a change in velocity can happen.
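To put rough numbers on that physical cap (these are my assumptions, not Waymo's published figures): at ~0.9 g of braking on dry pavement, the impact speed is set almost entirely by how much visible gap exists when the kid appears.

    import math

    G = 9.81  # m/s^2

    def impact_speed_mph(speed_mph, gap_m, decel_g=0.9, reaction_s=0.0):
        # Speed at contact if an obstacle appears gap_m ahead, braking at
        # decel_g after reaction_s of latency. Assumed physics, not data.
        v = speed_mph * 0.447  # mph -> m/s
        braking_gap = max(0.0, gap_m - v * reaction_s)
        v_impact = math.sqrt(max(0.0, v**2 - 2 * decel_g * G * braking_gap))
        return v_impact / 0.447

    # From 17 mph with ~3 m of visible gap: contact at ~5 mph even with
    # zero latency, while a 1 s human reaction eats the entire gap.
    print(impact_speed_mph(17, 3.0))                   # ~5
    print(impact_speed_mph(17, 3.0, reaction_s=1.0))   # ~17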
I don't think we will ever see the video, as any contact at all is viewed negatively by the general public, but for non-hyperbolic types it would probably be pretty impressive.
That doesn't mean it can't be solved. Don't drive faster than you can see. If you're driving 6 feet from a parked car, you can go slow enough to stop, assuming a worst case of a sprinter ready to leap out at any moment.
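Rough numbers for "slow enough", under assumptions I'm making up (0.9 g braking, a 6 m/s runner, 6 feet of lateral clearance, zero latency):

    # Fastest speed you can fully shed before a hidden runner crosses
    # 6 ft (1.8 m) into your lane at an assumed 6 m/s.
    decel = 0.9 * 9.81      # assumed braking limit, m/s^2
    window = 1.8 / 6.0      # seconds until the runner reaches the lane
    v_max = decel * window  # max initial speed you can scrub to zero
    print(v_max / 0.447)    # ~5.9 mph

Which is roughly the figure in the next reply.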
If we adopted that level of risk, we'd have 5 mph speed limits on every street with parking. As a society, we've decided that's overly cautious.
Oh, I have no problem believing that this particular situation would have been handled better by a human. I just want hard figures showing that (say) this happens 100x more rarely with robotaxis than with human drivers.
> The situation of "kid running out between cars" will likely never be solved
Nuanced disagreement (I agree with your physics), in that an element of the issue is design. Kids run out between cars on streets that stack building --> yard --> sidewalk --> parked cars --> driving cars.
One simple change could be adding a chain-link fence or other boundary between parked cars and the driving lane, increasing visibility and reaction time.
How do you add a chain-link fence between the parked and driving cars for on-street parking?
Second-order benefit: More Waymos = fewer parked cars
In areas with high parking contention, I think there's enough latent demand for parking that you wouldn't observe fewer parked cars until you reduce demand by a much greater amount.
> We accept the risks with humans because those humans accept risk.
It seems very strange to defend a system that is drastically less safe just because, when an accident happens, at least a human will be "liable". Does a human suffering consequences (paying a fine? losing their license? going to jail?) make an injury or death more acceptable, if it wouldn't have happened with a Waymo driver in the first place?
I think a very good reason to want to know who's liable is that Google has not exactly shown itself to enthusiastically accept responsibility for harm it causes, and there is no guarantee Waymo will continue to be safe in the future.
In fact, I could see Google building a highly complex algorithm to figure out the cost savings from reducing safety, balanced against the cost of spending more on marketing and lobbyists. We will have zero leverage to do anything if Waymo gradually becomes more and more dangerous.
> Wherever I'm going, I'll be there to apply the formula. I'll keep the secret intact. It's simple arithmetic. It's a story problem. If a new car built by my company leaves Chicago traveling west at 60 miles per hour, and the rear differential locks up, and the car crashes and burns with everyone trapped inside, does my company initiate a recall?
> You take the population of vehicles in the field (A) and multiply it by the probable rate of failure (B), then multiply the result by the average cost of an out-of-court settlement (C). A times B times C equals X. This is what it will cost if we don't initiate a recall. If X is greater than the cost of a recall, we recall the cars and no one gets hurt. If X is less than the cost of a recall, then we don't recall.
-Chuck Palahniuk, Fight Club
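As arithmetic, the formula in the quote is trivially small. Every input below is made up purely for illustration:

    # X = A * B * C vs. the cost of a recall (all figures hypothetical).
    A = 500_000             # vehicles in the field
    B = 0.0002              # probable rate of failure
    C = 2_000_000           # average out-of-court settlement, USD
    recall_cost = 300_000_000

    X = A * B * C           # expected settlement cost: 200M here
    print("recall" if X > recall_cost else "no recall")  # -> no recall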
Even in terms of plain results, I'd say the consequences-based system isn't working so well if it's producing 40,000 US deaths annually.
That's the fault of poor infrastructure and laws more than anything else. AVs must drive on the same infrastructure (though they can somewhat compensate).
Yes
Orders of magnitude? Something like 100 people die on the road in the US each day. If self-driving tech could save 10 lives per day, that wouldn't be good enough?
"It depends". If 50 people die and 50 people go to jail, vs. 40 people die and their families are left wondering if someone will take responsibility? Then that's not immediately standing out as an improvement just because fewer died. We can do better I think. The problem is simply one of responsibility.
If the current situation were that every day 40 people die but blame is rarely assigned, would you recommend a change where an additional 10 people die but someone is held responsible for those deaths?
People don't usually go to jail. Unless the driver is drunk or there's some other provable criminal negligence (or someone is actively trying to kill people, e.g. by driving into a crowd of protesters they disagree with), it's just chalked up as an accident.
Apart from the fact that only a minority of car-related deaths result in jail time, what kind of person wants many more people to die just so they can point at someone to blame? At what point do such people become the ones to blame for so many deaths?
In such situations it's useful to put yourself in the hypothetical. One rule: you can't pick who you will be, one of the dead or one of the living; that's assigned randomly.
So would you pick situation 1 or 2?
I would personally pick 1.
Do they go to jail?
That is not my experience here in the Bay Area. In fact here is a pretty typical recent example https://www.nbcbayarea.com/news/local/community-members-mour...
The driver cut in front of a person on an e-bike so fast that the rider couldn't react and hit the car. Then, after being hit, the driver stepped on the accelerator, went over the sidewalk on the other side of the road, and killed a 4-year-old. No charges were filed.
This driver will be back on the street right away.
Have you been in a self-driving car? There are some quite annoying hiccups, but they are already very safe; I would say safer than the average driver. Defensive driving is the norm. I can think of many times when the car avoided other dangerous drivers or oblivious pedestrians before I realized why it was taking action.
I generally agree the bar is high.
But, human drivers often face very little accountability. Even drunk and reckless drivers are often let off with a slap on the wrist. Even killing someone results in minimal consequences.
There is a very strong bias here. Everyone has to drive (in most of America), and people tend to see themselves in the driver. Revoking a license often means someone can’t get to work.
That’s an incentive to reduce risk, but if you empirically show that the AV is even 10x safer, why wouldn’t you chalk that up as a win?
> Self driving needs to be orders of magnitude safer for us to acknowledge it. If they're merely as safe or slightly safer than humans we will never accept it
It’s already accepted. It’s already here. And Waymo is the safest in the set—we’re accepting objectively less-safe systems, too.
> Self driving needs to be orders of magnitude safer for us to acknowledge it
All data indicates that Waymo is ~10x safer so far.
"90% Fewer serious injury or worse crashes"
https://waymo.com/safety/impact/