Comment by kypro

3 days ago

From the perspective of viewing FSD as an engineering problem that needs solving I tend to think Elon is on to something with the camera-only approach – although I would agree the current hardware has problems with weather, etc.

The issue with lidar is that many of the difficult edge cases of FSD are visible-light vision problems. Lidar might be able to tell you there's a car up front, but it can't tell you that the car has its hazard lights on and a flat tire. Lidar might see a human-shaped thing in the road, but it cannot tell whether it's a mannequin leaning against a bin or a human about to cross the road.

Lidar gets you most of the way there when it comes to spatial awareness on the road, but you need cameras for most of the edge cases because cameras provide the color data needed to understand the world.

You could never have FSD with just lidar, but you could have FSD with just cameras if you can overcome all of the hardware and software challenges of accurate 3D perception.

Given lidar adds cost and complexity, and most edge cases in FSD are camera problems, I think camera-only probably helps to force engineers to focus their efforts in the right place rather than hitting bottlenecks from over-relying on lidar data. This isn't an argument for camera-only FSD, but from Tesla's perspective it does keep costs down and allows them to continue to produce appealing cars – which is obviously important if you're coming at FSD from the perspective of an automaker trying to sell cars.

Finally, adding lidar as a redundancy once you've "solved" FSD with cameras isn't impossible. I personally suspect Tesla will eventually do this with their robotaxis.

That said, I have no real experience with self-driving cars. I've only worked on vision problems, and while lidar is great if you need to measure distances and avoid hitting things, it's the wrong tool if you need to comprehend the world around you.

This is so wild to read when Waymo is currently doing like 500,000 paid rides every week, all over the country, with no one in the driver's seat. Meanwhile Tesla seems to have a handful of robotaxis in Austin, and it's unclear if any of them are actually driverless.

But the Tesla engineers are "in the right place rather than hitting bottlenecks from over-relying on lidar data"? What?

  • I wasn't arguing Tesla is ahead of Waymo? Nor do I think they are. All I was arguing was that it makes sense from the perspective of a consumer automobile maker to not use lidar.

    I don't think Tesla is that far behind Waymo though, given Waymo's significant head start, the fact that Waymo has always been a taxi-first product, and the significantly more expensive tech they're using compared to Tesla.

    Additionally, it's not like this is a lidar vs cameras debate. Waymo also uses and needs cameras for FSD for the reasons I mentioned, but they supplement their robotaxis with lidar for accuracy and redundancy.

    My guess is that Tesla will experiment with lidar on their robotaxis this year, because the design decisions for a robotaxi should differ from those for a consumer automobile. But I could be wrong, because if Tesla wants FSD to work well on visually appealing and affordable consumer vehicles, then they'll probably have to solve some of the additional challenges of a camera-only FSD system anyway. I think it will depend on how much Elon decides Tesla needs to pivot into robotaxis.

    Either way, what is not debatable is that you can't drive with lidar alone. If the weather is so bad that cameras are useless, then Waymos are also useless.

    • What causes LiDAR to fail harder than normal cameras in bad weather? I understand that normal LiDAR algorithms assume a direct path from light source to object to camera pixel, while mist will scatter part of the light, but it would seem like this could be addressed in the per-pixel depth estimation algorithm that combines the complex amplitudes at the different LiDAR modulation frequencies.
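
      To make the multi-frequency idea concrete, here is a toy Python sketch of two-frequency phase-based depth estimation; the frequencies, range limit, and brute-force unwrapping are my own illustrative assumptions, not how any particular automotive LiDAR actually works:

      ```python
      import numpy as np

      C = 3e8  # speed of light, m/s

      def wrapped_phase(d, f):
          # Phase an amplitude-modulated CW lidar would measure for true depth d
          # at modulation frequency f (round trip, wrapped into [0, 2*pi)).
          return (4 * np.pi * f * d / C) % (2 * np.pi)

      def unwrap_two_freq(phi1, phi2, f1, f2, d_max=300.0):
          # Brute force: test every depth consistent with phi1 at f1, keep the
          # one whose predicted phase at f2 best matches phi2.
          r1 = C / (2 * f1)  # unambiguous range at f1
          best_d, best_err = None, np.inf
          for k in range(int(d_max / r1) + 1):
              d = (phi1 / (2 * np.pi) + k) * r1
              err = abs((wrapped_phase(d, f2) - phi2 + np.pi) % (2 * np.pi) - np.pi)
              if err < best_err:
                  best_d, best_err = d, err
          return best_d

      true_d = 87.3                # metres
      f1, f2 = 10e6, 13e6          # two illustrative modulation frequencies
      phi1, phi2 = wrapped_phase(true_d, f1), wrapped_phase(true_d, f2)
      print(unwrap_two_freq(phi1, phi2, f1, f2))  # ~87.3
      ```

      In mist, a scattered return adds its own complex amplitude to the direct one, biasing both measured phases at once, so even a multi-frequency scheme can lock onto a wrong but self-consistent depth.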

      I understand that small lens sizes mean falling droplets can obstruct the view behind them, while larger lenses can more easily see past a droplet.

      I seldom see discussion of the exact failure modes for specific weather conditions. Even if larger lenses are selected, the light source should use similar lens dimensions. Independent modulation of multiple light sources could also dramatically increase the information gained from each single LiDAR sensor.

      Do self-driving camera systems (conventional and LiDAR) use fixed or variable lens tilt? Normal camera systems have the focal plane perpendicular to the viewing direction, but for roads it might be more interesting to have a large swath of the horizontal road surface in focus. At least having one front-facing camera with the road plane in focus may prove highly beneficial.
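
      Tilt-shift photography already has the relevant geometry for this: under the Scheimpflug/hinge rule, tilting the lens by theta makes the plane of sharp focus pass through a hinge line a distance J = f / sin(theta) below the lens, and adjusting focus rotates that plane about the hinge line until it lies along the road. A back-of-envelope sketch (the focal length and mounting height are assumed numbers):

      ```python
      import math

      def tilt_for_road_focus(focal_length_m, camera_height_m):
          # Hinge rule: plane of sharp focus pivots about a line J = f/sin(theta)
          # below the lens; choosing J = camera height puts that plane on the road.
          return math.degrees(math.asin(focal_length_m / camera_height_m))

      # e.g. a 6 mm lens mounted 1.4 m above the road surface
      print(tilt_for_road_focus(0.006, 1.4))  # ~0.25 degrees of lens tilt
      ```

      The angle comes out tiny, which suggests a small fixed tilt, rather than variable-tilt hardware, might already suffice.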

      To a certain extent, an FSD system predicts the best course of action. When different courses of action have similar logits of expected fitness, we can speak of doubt. With reverse-mode automatic differentiation (RMAD) we can figure out which features, or which part of the view, is causing the doubt.

      A camera has motion blur (unless you can strobe the illumination source, but in daytime the sun is very hard to outshine), so it would seem an interesting experiment to (a rough sketch of the doubt-attribution part follows the list):

      1. identify in real time which doubts have the most significant influence on the determination of the best course of action

      2. have a camera that can track an object to eliminate motion blur while still enjoying optimal lighting (under the sun, or at night), just like our eyes can rotate

      3. rerun the best-course-of-action prediction and feed this information back to the company, so it can figure out the cost-benefit of adding a free-moving tracking camera dedicated to eliminating doubts caused by motion blur.
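
      A minimal PyTorch sketch of the doubt-plus-RMAD part of step 1; `policy_net`, its frame-to-logits interface, and the doubt definition are illustrative assumptions, not anyone's actual planner:

      ```python
      import torch

      def doubt_and_saliency(policy_net, frame):
          # doubt = (negated) gap between the top two action logits:
          # a small gap means the planner nearly prefers two different actions.
          frame = frame.clone().requires_grad_(True)
          logits = policy_net(frame)              # hypothetical: (C,H,W) frame -> action logits
          top2 = torch.topk(logits, 2).values
          doubt = top2[1] - top2[0]               # closer to 0 => more doubt
          doubt.backward()                        # the RMAD pass
          saliency = frame.grad.abs().sum(dim=0)  # per-pixel contribution to the doubt
          return doubt.item(), saliency

      # Pixels with high saliency inside regions smeared along the motion
      # direction would be candidates for "doubt caused by motion blur".
      ```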

  • Tesla has driven 7.5B autonomous miles to Waymo's 0.2B, but yes, Waymo looks like they are ahead when you stratify the statistics according to the ass-in-driver-seat variable and neglect the stratum that makes Tesla look good.

    The real question is whether doing so is smart or dumb. Is Tesla hiding big show-stopper problems that will prevent them from scaling without a safety driver? Or are the big safety problems solved and they are just finishing the Robotaxi assembly line that will crank out more vertically-integrated purpose-designed cars than Waymo's entire fleet every day before lunch?

    • Tesla's also been involved in WAY more accidents than Waymo - and has tried to silence the people involved, claim FSD wasn't active, etc.

      What good is a huge fleet of Robotaxis if no one will trust them? I won't ever set foot in a Robotaxi, as long as Elon is involved.
