
Comment by delichon

1 day ago

Disagree. The current limitations of Tesla self-driving are not difficulties in judging distances that lidar would solve. They're inference deficiencies that persist even with accurate geometry.

It must be a bit embarrassing having Waymo and Baidu forging ahead with driverless taxis while Tesla's still don't work well, though.

If the AI was good enough, vision-only self-driving would be at least as good as the best human.

The AI isn't good enough. I'm starting to suspect that current ML can't learn fast enough in reasonable wall-clock timeframes, given how long passes between the relevant examples it needs to learn from.

It's fine to lean on other sensory modalities (including LIDAR, radar, ultrasound, whatever else you fancy) until the AI gets good enough.

  • It's safer than human drivers now. That's good enough. It will take more than that to convince the world, and it should; I applaud the well-earned skepticism. But I'm an old guy who has no problem qualifying for a driver's license, and if you replaced me with FSD 14.2, especially under less-than-ideal conditions like at night or in a storm, everyone would be safer.

    I predict a cusp will be reached in the next few years, when safety advocates flip from trying to slow down self-driving to trying to mandate it.

    • I can't speak to your driving level, but everything I see about Tesla's FSD has unfortunately been giving me "this seems sus" vibes even back when I was extremely optimistic about them in particular and self driving cars more generally (so, last decade).

      Unfortunately, the only stats about Tesla's FSD that I can find are crowd-sourced, and what they show is that despite recent improvements, they're still not particularly good.

      Also unfortunately, the limited geo-fencing of the areas in which the robo-taxi service operates, and the fact that they initially* launched the service without the permits that would let them forgo a human safety monitor, strongly suggest that it hasn't generalised to enough domains yet.

      Lack of generality means it's possible for you to be 100% right about Tesla's FSD on the roads you normally use, and yet, a little outside that area, the AI might shock you by reliably disengaging at speed for no human-apparent reason and leaving you upside down in a field.

      * I'm not sure what has or hasn't changed since launch: all the news reporting on this was from sites with more space dedicated to ads than to copy, so IMO slop news regardless of whether it was written by an AI or not.

  • No reason we can't rely on other sensory modalities after the AI "gets good enough," either. Humans don't have LIDAR, but that doesn't mean that LIDAR is a "cheat" for self-driving cars, or something we should try to move past.

    • In principle, I agree; but remember that people like to save money, and that includes by not spending on excessive sensors when the minimum set will do.

      What I think went wrong with Musk/Tesla/FSD is that he cut the sensors here before doing so would actually save money.

LIDAR provides dense point clouds from which you can read geometry directly, the same geometry Tesla's vision-only methods struggle to infer.

(Subtle things, like huge firetrucks parked straight across the road.)
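To make the point concrete, here's a minimal sketch (my own illustration, not anyone's production code, with made-up parameter names and thresholds): once you have lidar points in the car's frame, "is there something big parked across my lane?" reduces to counting returns inside the driving corridor, with no learned perception in the loop.

```python
import numpy as np

def obstacle_in_corridor(points, lane_half_width=1.5, max_range=40.0,
                         min_height=0.3, min_hits=50):
    """points: (N, 3) array of (x, y, z) in the car frame, x forward.

    Returns True if enough lidar returns sit in the corridor ahead,
    above ground level -- e.g. a firetruck parked across the road.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_corridor = (x > 0) & (x < max_range) & (np.abs(y) < lane_half_width)
    above_ground = z > min_height
    return int(np.count_nonzero(in_corridor & above_ground)) >= min_hits

# A wall of returns 20 m ahead spanning the lane: clearly an obstacle.
truck = np.column_stack([
    np.full(200, 20.0),              # x: 20 m ahead
    np.linspace(-1.4, 1.4, 200),     # y: spread across the lane
    np.full(200, 1.0),               # z: 1 m up (truck body height)
])
print(obstacle_in_corridor(truck))   # → True
```

A real stack would cluster and track objects rather than threshold a box, but the geometry itself comes for free from the sensor; a vision-only system has to infer that same depth from pixels first.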