
Comment by srameshc

12 hours ago

Can anyone with a better understanding of the LIDAR vs. camera approach to autonomous driving explain how Tesla would handle such a situation?

This is a HW4 Tesla on FSD 14.3.2 trying to drive into a lake five days ago (à la The Office): https://www.reddit.com/r/TeslaFSD/comments/1t9rl2u/fsd_tried..., so I would not say Tesla has solved standing water yet.

That said, FSD seems quite capable of routing around standing water in many cases (e.g. https://xcancel.com/planoken/status/2030754820462633031, https://www.reddit.com/r/TeslaFSD/comments/1pw9f2m/fsd_navig..., https://xcancel.com/BLKMDL3/status/1991862465328779317, https://xcancel.com/JVTacoma/status/2046313902749921638), so handling the remaining cases seems more like a model intelligence / data issue than a sensor limitation. Lidar beams generally bounce off mirrorlike surfaces without returning to the sensor, so all lidar would tell you about standing water is "there's something shiny/reflective in this region of the scene", which you already know from cameras + headlights.
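One way to picture the "shiny region" signal described above: a specular surface deflects beams away from the sensor, so it shows up in the lidar range image as a patch with almost no valid returns. Here is a minimal sketch of that idea; the function name, the NaN-for-no-return convention, and the region-of-interest layout are all my own assumptions for illustration, not any vendor's actual API.

```python
import numpy as np

def flag_no_return_region(range_image, roi, max_valid_fraction=0.1):
    """Flag a region of a lidar range image as possibly specular
    (e.g. standing water) when almost no beams come back.

    range_image: 2D array of ranges in meters, np.nan where a beam
                 produced no return
    roi: (row_slice, col_slice) covering the road surface ahead
    """
    patch = range_image[roi]
    valid_fraction = np.isfinite(patch).mean()
    # A mirrorlike surface reflects beams away from the sensor, so it
    # appears as a "hole": very few valid returns inside the patch.
    return bool(valid_fraction < max_valid_fraction)
```

Note that this only tells you "something here is not diffusely reflecting", not whether it's water, ice, or polished metal, which is the commenter's point: cameras plus headlights already give you that much.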

Waymo has LIDAR and cameras, so it is better equipped for every situation.

  • This seems tautological, but in practice, you might expect to see different results.

    Engineering hours are finite, so if they're spread across interpreting signals from two different sources, they might not go deep enough to make either one as good as it could be.

    Having your engineering resources more focused on a particular approach might actually yield better results.

    I say this as someone who's dealing with LiDAR + vision vs pure vision in a different domain, and at this point, I actually think our pure vision systems are better.

More often than not, constraints refine and focus a project rather than restricting it. It’s best to start work with as few variables as possible and add new ones only when absolutely necessary; you make a lot more progress that way.

For very complex things like AVs, it is critically important to keep the number of such variables down, since each one acts on complexity and workload not additively but more like quadratically, or worse: a combinatorial explosion.

  • Unless the power is out

    https://abc7news.com/post/san-francisco-leaders-press-waymo-...

Kind of unrelated. That issue was due to a misguided effort to be cautious by having vehicles request human review when they didn't really need it. Waymo fixed the issue by letting the vehicles operate in their normal, independent mode.

Part of the problem is that SF's traffic lights simply turn off in a power outage, rather than flashing red on battery power as I have seen in many other jurisdictions.

LIDAR isn't helpful for water. Standing water behaves like a mirror on LIDAR.

  • This is one of the reasons why I'm suspicious of camera-only systems, here in Finland. Half the year there's a lot of snow and ice around. Which I imagine means most of the view is "white" and "shiny". Coupled with the dark winters it's gotta be a nightmare to deal with.

  • Not necessarily. Depending on angle and water depth, multi-return LIDAR can give you returns from both water surface and the road surface beneath, in the same way multi-return LIDAR can produce returns from vegetation and the ground beneath.

Could you use a different part of the EM spectrum to detect water? Water strongly absorbs in parts of the microwave band, and I wonder if you could exploit that attenuation. The only clue a human driver has in that situation is in the visible spectrum: the road markings disappear from view, which is especially challenging at night.

In theory you could transmit at different frequencies, but then you run afoul of all kinds of potential interference with other systems, and of local regulations.
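The multi-return idea a few comments up (first echo off the water surface, last echo off the road beneath, as with vegetation vs. ground in airborne lidar) can be sketched roughly like this. The data layout (a list of echo ranges per beam) and the function name are assumptions I've made for illustration; real sensors report return number and echo count in their own formats.

```python
def split_surface_and_ground(beams):
    """Separate surface and ground echoes from multi-return beams.

    For beams that produced two or more echoes, treat the nearest
    return as the water surface and the farthest as the road/ground
    beneath, analogous to canopy vs. ground returns in airborne lidar.

    beams: list of per-beam echo lists, each a list of ranges in meters.
    Returns (surface_ranges, ground_ranges) for multi-return beams only.
    """
    surface, ground = [], []
    for echoes in beams:
        if len(echoes) >= 2:
            ordered = sorted(echoes)
            surface.append(ordered[0])   # nearest echo: water surface
            ground.append(ordered[-1])   # farthest echo: road beneath
    return surface, ground
```

As the parent comment notes, whether you actually get that second return depends on incidence angle, water depth, and clarity, so this is a sometimes-available signal rather than a guarantee.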