Comment by estearum

10 hours ago

I don't think that's the reasoning.

The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.

Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.

Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.

Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams. That's mainly a software problem, and companies like Waymo have solved it, but Tesla was having trouble with it earlier, and if you don't do it right, your self-driving system can end up less reliable.

  • Sensor fusion seems like it'd be a big problem when you're handcoding lots of C++, and way less of a problem when all the sensors are just feeding into one big neural network, as Tesla and probably others are doing now. The training process takes care of it from there.

    One of Udacity's first courses was on self-driving, taught by Sebastian Thrun who later cofounded Waymo. He went through some Bayesian math that takes a collection of lidar points, where each point contributes to a probabilistic assessment of what's really going on. It's fine if different points seem to contradict each other, because you're looking for the most likely scenario that could produce that combined sensor data. Transformers can do the same sort of thing, and even with different sensor types it's still the same sort of problem.
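The Bayesian idea described above can be sketched in a few lines (my own illustration, not from Thrun's course): independent noisy measurements of the same quantity combine by precision weighting, so points that "contradict" each other still yield a single most-likely estimate.

```python
# Minimal sketch of precision-weighted Gaussian fusion. Each measurement
# is a (value, variance) pair, e.g. a lidar range return; the numbers
# below are made up for illustration.
def fuse(measurements):
    """Fuse independent Gaussian measurements of one quantity."""
    precision = sum(1.0 / var for _, var in measurements)
    mean = sum(val / var for val, var in measurements) / precision
    return mean, 1.0 / precision  # fused estimate and its variance

# Two slightly disagreeing range readings:
est, var = fuse([(10.2, 0.04), (9.8, 0.04)])
# est == 10.0, var == 0.02 -- the fused estimate is more certain
# than either input on its own.
```

Note that the fused variance is smaller than either input's, which is the formal sense in which more sensors (properly fused) reduce error.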

  • > Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams

    The response to the challenge shouldn't be whittling down your sensor-suite to a single type, but to get good at sensor fusion.

  • I think this is the key. In theory, more information streams, when fused together properly, should reduce error. If their stumbling block is the "properly" part, then the rest of those justifications come off as a pretty weak way to sidestep their own inability to deliver this properly.

    We have lots of evidence of similar strategies being used in other domains, this seems like an especially life-critical domain that ought to have high rigor and standards applied.
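The "all sensors feeding into one big neural network" idea mentioned earlier can be sketched like this. This is a hypothetical illustration, not Tesla's actual architecture: the shapes, the single layer, and the random weights are all made up.

```python
import numpy as np

# Hypothetical learned fusion: per-sensor encoders produce feature
# vectors, which are concatenated and mixed by one learned layer.
# In training, gradients flow through this layer, so the network
# learns how to weigh the streams -- no hand-coded fusion rules.
rng = np.random.default_rng(0)
camera_feat = rng.standard_normal(64)  # stand-in for a camera embedding
lidar_feat = rng.standard_normal(32)   # stand-in for a lidar embedding

fused_in = np.concatenate([camera_feat, lidar_feat])  # shape (96,)
W = 0.1 * rng.standard_normal((16, 96))  # would be learned in training
fused_out = np.maximum(W @ fused_in, 0.0)  # one ReLU layer mixes streams
print(fused_out.shape)  # (16,)
```

The point of the sketch is only that, once everything is features in one network, "fusion" stops being a separate hand-written module and becomes something the training process handles.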

> how incredible the human brain is compared to computers.

It is pretty incredible but people will (rightly so?) hold automated drivers to an ultra high standard. If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.

  • > If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.

    This is evidently false. Robotaxi crash rates exceed human drivers', but there's not an effective regulatory agency to outlaw them!

    https://futurism.com/advanced-transport/tesla-robotaxis-cras...

    • According to that article, Waymo crashes 2.3x more often than human drivers (one crash every 98k miles vs every 229k miles), which is clearly false. I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than the average human driver.
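The 2.3x figure follows directly from the quoted mileage numbers:

```python
# Quick check of the quoted figures: one reported crash per 98k miles
# vs one per 229k miles implies roughly a 2.3x higher crash frequency.
ratio = 229_000 / 98_000
print(round(ratio, 1))  # prints 2.3
```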


Musk has never been scared of vertically integrating something that's too expensive initially.

> Musk miscalculated on 1) cost reduction in LIDAR

Given that Musk has a history of driving lower costs, it's unlikely he overestimated the long-term cost floor. He just thought we were close to self-driving in 2014.

Another factor is Andrej Karpathy, who was the primary architect for the vision-only approach. Musk wanted fewer parts, and Karpathy believed he could deliver that. Karpathy is still an advocate of vision-only.

> Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.

And, less excusably, ignorant of how incredible human eyes are compared to small-sensor cameras: in particular, their high dynamic range in low light and with fast motion. Every photographer knows this.

  • And also ignorant of how those two eyes have binocular vision and adjustable positions, and can look in multiple mirrors for full spatial awareness.

    • There are good arguments but this isn’t one. Many humans (like me!) drive fine without binocular vision. And the cars have many cameras all around, with wide angle lenses that are watching everything all the time, when a human can only focus in one direction at a time.


Eh, I think ‘miscalculation’ might be giving too much credit about good intentions.

He wanted (needed?) to get on the self-driving hype train to pump up the stock price. He knew there was zero chance at the time that they could sell it at the price point lidar required, or even other effective sensors like radar, so he sold it anyway at the price point people would buy at, even though it was never plausibly going to work at the level being promised.

There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.

Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person would have expected, given what was known at the time.

  • I think Musk is really not as smart as he thinks he is and this specific thing was probably an earnest mistake. Lots of other fraudulent stuff going on though of course!

IMHO, not using lidar sounds like a premature optimisation and a complication, with a level of hubris.

This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.

Considering he also runs a company that puts computer chips inside brains to augment them, you'd think he ought to have a sounder understanding of the limits of both.

There certainly is an ongoing miscalculation regarding human intelligence and, consequently, empathy.

Seeing the SOTA in FSD tech, it's not obvious so far that Musk miscalculated.

  • Nah

    If the data were positive for Tesla, Tesla would publish it

    They do not, so one can infer it is not flattering

    (Before you post the "Miles driven with FSD" chart, you should know upfront (as Tesla must) that chart doesn't normalize by age of vehicle or driving conditions and is therefore meaningless/presumably designed to deceive)