
Comment by top_sigrid

6 hours ago

This is so dumb, I don't even know if you are serious. Nobody ever said it's lidar instead of cameras; it's an additional sensor alongside cameras. And everybody seems to agree that that is valuable sensor information (except Tesla).

I'm able to drive without lidar, with just my eyeball feeds.

I agree that lidar is very valuable right now, but I think in the endgame, yeah it can drive with just cameras.

The logic follows, because I drive with just "cameras."

  • Yeah, but your "cameras" also have a bunch of capabilities that hardware cameras don't, plus they're mounted on a flexible stalk in the cockpit that can move in any direction to update the view in real-time.

    Also, humans kinda suck at driving. I suspect that in the endgame, even if AI can drive with cameras only, we won't want it to. If we could upgrade our eyeballs and brains to have real-time 3D depth mapping information as well as the visual streams, we would.

    • What "a bunch of capabilities"?

A complete inability to get true 360° coverage, which the neck has to swivel wildly across windows and mirrors to somewhat compensate for? Being able to get high FoV or high resolution, but never both? An IPD so low that stereo depth estimation unravels beyond 5m, which, in self-driving terms, is point-blank range?

Human vision is a mediocre sensor kit, and the data it gets has to be salvaged in post. The human brain was just doing computational photography before it was cool.

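For context on the baseline point above: depth uncertainty in a stereo rig grows with the square of distance and shrinks with the baseline, roughly Δz ≈ z²·Δθ / B, where Δθ is the smallest resolvable disparity angle. A minimal back-of-the-envelope sketch, with a human-like 65 mm baseline and an assumed (illustrative, not measured) ~40 arcsecond disparity resolution:

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def stereo_depth_error(z_m, baseline_m=0.065, disparity_res_arcsec=40.0):
    """Approximate depth uncertainty (m) at range z_m for a stereo pair.

    Uses the standard small-angle relation dz ~ z^2 * dtheta / baseline.
    Default numbers are illustrative assumptions: ~65 mm interpupillary
    distance and ~40 arcsec of resolvable disparity.
    """
    delta_theta = disparity_res_arcsec * ARCSEC
    return z_m ** 2 * delta_theta / baseline_m

if __name__ == "__main__":
    for z in (5, 20, 50, 100):
        print(f"at {z:>3} m: +/- {stereo_depth_error(z):.2f} m")
```

With these assumed numbers the error is centimeters at 5 m but meters at highway following distances, which is the quadratic falloff the comment is gesturing at; the exact crossover depends heavily on the disparity-resolution figure you plug in.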