Comment by bonsai_spool

7 hours ago

I think I'm misunderstanding: they're converting video into their representation, which was bootstrapped with LIDAR, video, and other sensors. I feel you're alluding to Tesla, but Tesla could never have this outcome, since they never had a LIDAR phase.

(edit: I'm referring to deployed Tesla vehicles. I don't know what their research fleet comprises, but other commenters explain that this fleet does collect LIDAR.)

They can and they do.

https://youtu.be/LFh9GAzHg1c?t=872

They've also built it into a full neural simulator.

https://youtu.be/LFh9GAzHg1c?t=1063

I think what we're seeing is that both of them converged on the correct approach, one decided to talk about it, and that triggered disclosure all around, since nobody wants to be seen as lagging.

Tesla does collect LIDAR data (people have seen them doing it; it's just not on all of the cars), and they do generate depth maps from sensor data, but from the examples I've seen the output is much lower resolution than these Waymo examples.

  • Tesla does it to build high-definition maps of the areas where their cars try to operate.

    • Tesla uses LIDAR to train their models to generate depth data from camera input (a rough sketch of that kind of training setup is below). I don’t think they have any high-definition maps.
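
For anyone curious what "using LIDAR to train camera depth" means in practice, here is a minimal sketch of one common setup, assuming PyTorch: project sparse LIDAR returns into the image plane and supervise the network only at the pixels where a return exists. The network, tensor shapes, and loss below are illustrative placeholders, not Tesla's (or anyone's) actual pipeline.

    # Minimal sketch of LIDAR-supervised monocular depth training.
    # Everything here (network, shapes, loss) is a toy illustration.
    import torch
    import torch.nn as nn

    class TinyDepthNet(nn.Module):
        """Toy encoder-decoder that maps an RGB image to a dense depth map."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                nn.Softplus(),  # depth must be positive
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinyDepthNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Fake batch: camera frames plus sparse LIDAR depth already projected
    # into the image plane. LIDAR returns cover few pixels, so a boolean
    # mask marks where ground truth actually exists.
    images = torch.rand(4, 3, 128, 256)             # B x 3 x H x W
    lidar_depth = torch.rand(4, 1, 128, 256) * 80.0  # meters
    valid_mask = torch.rand(4, 1, 128, 256) < 0.05   # ~5% of pixels have returns

    pred = model(images)
    # Supervise only where LIDAR measured depth; L1 in log-depth is a
    # common choice so near and far errors are weighted more evenly.
    loss = (torch.log1p(pred[valid_mask])
            - torch.log1p(lidar_depth[valid_mask])).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Once trained this way, the network produces dense depth from cameras alone, which is why a LIDAR-equipped research/mapping fleet can pay off even if no customer car ever ships with the sensor.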