Comment by RoxiHaidi

3 hours ago

> One day an AI will obviously be infinitely better at driving than a human will be, but that day is not yet here.

It is finitely better today and will be better still. This doesn't mean it's better at everything a human driver can do; it's just better on average. The jagged frontier is real and a very important safety consideration; nevertheless, the averages matter, too.

> that day is not yet here

Have you been in a Waymo? SAE Level 4 is here, and it’s safer than humans [1].

[1] https://waymo.com/safety/impact/

  • Not the OP, but I have! I also have FSD v14 in my Tesla.

    Vastly VASTLY prefer Waymo. It's very good at its mission and is, at minimum, infinitely better than being in an Uber rideshare. I'd rather wait 20 minutes for a Waymo than 5 for any Uber or 0 to use my own car.

    Ironically, Waymo got me much more interested in using my city's public transportation offering, which is much better than I previously thought.

    That said, Tesla FSD v14 is the best supervised autonomous option that you can actually use.

  • This is Waymo saying Waymo cars are safer than humans. Obviously the "it’s safer than humans" claim is a selection-biased, statistically underpowered, apples-to-oranges comparison with a limited sample size.

    • > Obviously the "it’s safer than humans" claim is a selection-biased, statistically underpowered, apples-to-oranges comparison with a limited sample size.

      I haven't seen a good criticism of their methodology. If you have one, I'd be curious to hear it.

      On a more direct measure, Waymos have had starkly lower rates of fatalities and at-risk incidents than human drivers on average and, I think, even than humans near their best.
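
      For intuition, here's a toy sketch of the kind of rate comparison at issue. The function name and the event counts are hypothetical placeholders, not Waymo's actual numbers; the point is just that very few events make for a wide confidence interval, which is what "statistically underpowered" would look like.

      ```python
      import math

      def rate_ratio_ci(events_a, miles_a, events_b, miles_b, z=1.96):
          """Approximate 95% CI for the ratio of two per-mile incident
          rates, via the standard log-rate-ratio method."""
          rr = (events_a / miles_a) / (events_b / miles_b)
          se = math.sqrt(1 / events_a + 1 / events_b)  # log-scale standard error
          return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

      # Hypothetical counts, purely for illustration -- not real data.
      # With only 3 events, the interval is wide: roughly 0.06 to 0.62
      # around a point estimate of 0.2.
      print(rate_ratio_ci(events_a=3, miles_a=50e6, events_b=300, miles_b=1e9))
      ```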

Personally, I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it.

I don't ever want to be inside an AI-driven vehicle that might decide to sacrifice me to minimize other damage.

  • > to minimize other damage

    You mean the deaths of multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.

    What's the ratio between "bodies of your own kids" and "other human bodies you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to make in trade, in terms of injury or death?

    I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?

    *Meaning that, with a ratio of 2, for example, you would require two unfamiliar deaths to justify losing one of your own kids.
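
    To make the footnote's arithmetic concrete, here's a toy sketch. The function `acceptable_trade` and its `ratio` parameter are purely illustrative inventions for this thread's thought experiment, not anything a real vehicle implements.

    ```python
    def acceptable_trade(family_deaths, stranger_deaths, ratio=2.0):
        """True if the trade 'passes' under the hypothetical ratio:
        ratio=2 means two unfamiliar deaths per family death."""
        return stranger_deaths >= ratio * family_deaths

    print(acceptable_trade(1, 2))  # True: matches the footnote's ratio-2 example
    print(acceptable_trade(1, 1))  # False: one stranger isn't enough at ratio 2
    ```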

    • We can take the AI out of the question entirely and ask how many other humans you, personally, as a driver, would be willing to mow down to avoid your own death (driving off a bridge, say).

      I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit ("that the AI will prioritize my life and safety over literally any other concern") is grotesque.

    • > You mean the deaths of multiple other people, do you not

      I mean the deaths the AI predicts for other people, yes.

      And I'm not saying I would never choose to kill myself over killing a school bus full of children, but I'll be damned if a computer will make that choice for me.

  • > not sure I would trust it

    This is a fair concern. I'm unconvinced, though, that it translates into any real market or political pressure.

    On the market side, Waymo is constrained by some combination of production and auxiliaries (Tesla, by technology). On the political side, the salient debate is about jobs, in large part because Waymo, as the best-in-class system, has put many of the practical safety questions to bed.

    • Sure, but what happens when the tech captures the market and inevitably enshittifies, the same way every other piece of tech has?

      I'm not really thinking about the period when self-driving is state-of-the-art research. I'm talking about when it becomes table stakes.

      Honestly, the real truth is that I just do not trust tech companies to make decisions that are remotely in my best interest anymore.

      I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, so why would I ever trust a car driven by software built by the same sort of asshole?

  • What would that guarantee look like, and would it be legal to sell a product that made that guarantee?

    "Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.

  • I find it interesting that you don't give other drivers any consideration in your analysis.

    • Other drivers should take public transit if they don't want to, or are afraid to, operate their own vehicles.

      As for me, I actually like driving, and I'm good at it. I'm not afraid of operating my own vehicle the way so many people seem to be.

Was it 2015 when HN was full of predictions that we wouldn't be driving ourselves within five years? From what I see, the serious accidents with human drivers are caused by deliberately doing the dangerous thing (in my corner of the world, mostly overtaking at the wrong place or time, or both). Beyond that, humans drive very safely. Outside of a tightly controlled environment, I don't see self-driving getting much better until systems have a proper world model. So, maybe never.