Comment by senordevnyc

10 hours ago

Yeah, but your "cameras" also have a bunch of capabilities that hardware cameras don't, plus they're mounted on a flexible stalk in the cockpit that can move in any direction to update the view in real-time.

Also, humans kinda suck at driving. I suspect that in the endgame, even if AI can drive with cameras only, we won't want it to. If we could upgrade our eyeballs and brains to have real-time 3D depth mapping information as well as the visual streams, we would.

What "a bunch of capabilities"?

A complete inability to get true 360° coverage, so the neck has to swivel wildly across windows and mirrors to partially compensate? Being able to get high FoV or high resolution, but never both? An IPD so low that stereo depth estimation unravels beyond 5 m, which, in self-driving terms, is point-blank range?
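
To put rough numbers on the stereo claim, here's a back-of-the-envelope sketch. The ~63 mm baseline (typical human IPD) and ~20 arcsecond stereoacuity are assumed ballpark figures, not anything measured in this thread:

```python
import math

# Depth uncertainty of a stereo pair grows quadratically with distance.
# The disparity angle is theta ~= B / Z, so a small angular error d_theta
# maps to a depth error of  dZ ~= Z^2 * d_theta / B.
B = 0.063                           # baseline in meters (~human IPD, assumed)
d_theta = math.radians(20 / 3600)   # ~20 arcsec stereoacuity (assumed)

for z in (2, 5, 10, 20, 50):
    dz = z**2 * d_theta / B
    print(f"at {z:3d} m: depth uncertainty ~ {dz:.2f} m ({100 * dz / z:.1f}%)")
```

Under those assumptions the uncertainty is a few centimeters at 5 m but already meters at highway distances, which is the quadratic falloff being pointed at here.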

Human vision is a mediocre sensor kit, and the data it gets has to be salvaged in post. The human brain was just doing computational photography before it was cool.

  • What do you believe the frame rate and resolution of Tesla's cameras are? If a human can tell the difference between two virtual-reality displays, one with a frame rate of 36 Hz and a per-eye resolution of 1448x1876, and another with numerically greater values, then the cameras Tesla uses for self-driving are inferior to human eyes. The human eye resolves the equivalent of roughly 5 to 15 megapixels in the fovea, while the current highest-definition automotive cameras Tesla uses barely clear 5 megapixels across the entire field of view. By your own criterion, the cameras Tesla uses today are never high-definition. I can physically saccade my eyes by a millimeter here or there and see something their cameras would never be able to resolve.
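
    Rough math on the resolution side of that (a quick sketch; the sensor width and FoV values below are assumed illustrative numbers, not confirmed Tesla hardware specs, and ~1 arcminute foveal acuity is the usual textbook figure):

    ```python
    # Pixels-per-degree comparison. All camera numbers here are assumed,
    # illustrative specs, not confirmed Tesla hardware figures.
    def px_per_deg(h_pixels: int, h_fov_deg: float) -> float:
        return h_pixels / h_fov_deg

    # A ~5 MP sensor (2896 px wide, assumed) spread across a wide FoV:
    wide = px_per_deg(2896, 120)    # ~24 px/deg
    narrow = px_per_deg(2896, 50)   # ~58 px/deg with a narrower lens

    # Human foveal acuity is roughly 1 arcminute, i.e. ~60 "px"/deg,
    # and saccades can repoint that fovea anywhere in the visual field.
    human_fovea = 60

    print(f"wide lens:   {wide:.0f} px/deg")
    print(f"narrow lens: {narrow:.0f} px/deg")
    print(f"fovea:       ~{human_fovea} px/deg (steerable)")
    ```

    The fixed camera has to spend its pixel budget across the whole FoV at once; the fovea concentrates its budget wherever it's pointed.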

    • Yep, Tesla's approach is 4% "let's build a better sensor system than what humans have" and 96% "let's salvage it in post".

      They didn't go for the easy problem, that's for sure. I respect the grind.
