Comment by gamblor956

3 years ago

> My perceptions contradict this article: (1) The technology is progressing faster than is generally recognized, with vehicles getting progressively better at dealing with edge cases and handling failures gracefully. (2) Judging by the videos I've watched online, Tesla is significantly ahead of everyone else.

We must be watching different videos. And experiencing Teslas differently. I see Teslas constantly slamming on their brakes on freeways, swerving across lanes, and avoiding collisions by the narrowest margins only because their owners took control before certain death. And that's just driving around in L.A. traffic; the YouTube videos are even worse. Tesla's vaunted camera-based system still can't recognize white semis or other broad, flat obstacles that a human or radar-based system would recognize instantly.

Tesla was ahead of their competitors several years ago. Now they're way behind, and dropping further behind with every "update" that addresses only the problems that got media coverage, with "solutions" that look like brittle, manual programmer overrides rather than any sort of scalable AI-driven capability.

And it's irrelevant that Tesla has 160,000 drivers on the road "training" the system, since they selected the drivers who drive in the safest road conditions using a "safe driver" metric that has no relationship to safe driving. This means that Tesla's "AI" (to the extent it can be called that) is being overwhelmed with tons of useless data that overtrains it on easy roads, with almost no training for difficult conditions or edge cases. As a point of comparison, most vehicles today with advanced cruise control can drive the same roads that FSD can safely drive...but they don't need advanced AI to do it.
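
If you want to see why that selection effect matters, here's a toy simulation of the argument (all numbers are invented, and the correlation between "safety score" and driving conditions is my assumption, not measured data):

    # Toy illustration of the selection-bias argument: gating fleet
    # access on a "safety score" that correlates with easy conditions
    # skews what the model ever gets to train on. Numbers are made up.
    import random

    random.seed(0)

    def sample_driver():
        # Hypothetical: drivers in hard conditions (dense urban, steep
        # or rough roads) score lower than drivers in easy conditions.
        hard_conditions = random.random() < 0.5
        safety_score = random.gauss(80 if hard_conditions else 95, 5)
        return hard_conditions, safety_score

    population = [sample_driver() for _ in range(100_000)]
    admitted = [hard for hard, score in population if score >= 95]

    print(f"hard-condition share, all drivers: "
          f"{sum(h for h, _ in population) / len(population):.1%}")
    print(f"hard-condition share, admitted fleet: "
          f"{sum(admitted) / len(admitted):.1%}")
    # The admitted fleet massively over-represents easy conditions, so
    # the training stream is dominated by scenarios the system already
    # handles -- the overtraining problem described above.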

It doesn't matter how far ahead you were at the beginning of the race; it matters how far ahead you are at the finish line.

Oh my dog.

Part of this is the fault of Tesla's marketing, but you are wildly off the mark. The cars you are seeing are running Autopilot, not FSD. Most of them are on the even older, radar-based Autopilot.

Tesla Vision has no issues detecting white semis crossing your path. Vehicles with radar, on the other hand, struggle to distinguish those from overhead bridges, so if one appears close to a bridge, you're SOL due to whitelisting.
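
For anyone unfamiliar with the whitelisting problem, a stripped-down sketch of the failure mode (fields, thresholds, and logic all invented for illustration; this is not any vendor's actual code):

    # Why legacy radar ADAS "whitelists" stationary returns.
    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float        # distance to the reflector
        rel_speed_mps: float  # closing speed relative to our car

    def should_brake_for(ret: RadarReturn, ego_speed_mps: float) -> bool:
        # A reflector closing at roughly our own speed is stationary in
        # the world frame. Radar alone can't tell a bridge deck from a
        # stopped truck, and braking for every bridge is unacceptable,
        # so stationary returns get whitelisted at highway speed.
        stationary = abs(ret.rel_speed_mps - ego_speed_mps) < 1.0
        if stationary and ego_speed_mps > 20.0:
            return False  # ignored as presumed overhead/roadside clutter
        return True

    # A semi stopped across the lane looks exactly like a bridge here:
    truck = RadarReturn(range_m=80.0, rel_speed_mps=30.0)
    print(should_brake_for(truck, ego_speed_mps=30.0))  # False -> no braking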

Tesla Vision in FSD is a much, much more developed version, which has been excellent at detecting its environment, especially now with the new occupancy network. Its decision-making needs work, but you will notice, when watching all those videos, that detection of vehicles - even occluded ones - is not a problem at all.

Your comment about useless data is also wrong. They are experts in their field and they know exactly what type of data they need. Both Tesla and Karpathy himself have shown in multiple presentations that they focus on training on unique/difficult situations, because more data from perfect conditions is no longer useful to them. They have shown exactly how they do it, and even showed off the great infrastructure they've built for autolabeling.
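
The idea, as presented, is basically hard-example mining: keep only the clips the current model struggles with. A minimal sketch of that idea (hypothetical fields and thresholds; this is not Tesla's actual pipeline):

    # Keep only clips the current model finds hard; discard the rest.
    from typing import NamedTuple

    class Clip(NamedTuple):
        model_confidence: float   # detector confidence over the clip
        driver_intervened: bool   # human took over during the clip

    def worth_labeling(clip: Clip) -> bool:
        # "More data from perfect conditions is not useful anymore":
        # confident, intervention-free clips are thrown away.
        return clip.driver_intervened or clip.model_confidence < 0.6

    fleet = [Clip(0.97, False), Clip(0.42, False), Clip(0.9, True)]
    queue = [c for c in fleet if worth_labeling(c)]
    print(len(queue))  # 2 -- only the hard clips go to autolabeling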

Your claim about cruise control from competitors being equal to FSD is laughable. They don't even match Autopilot: https://www.youtube.com/watch?v=xK3NcHSH49Q&list=PLVa4b_Vn4g...

  • Going to post this here as a rebuttal: a video made by Tesla fans that shows some severe shortcomings in the current version of FSD.

    https://insideevs.com/news/616509/tesla-full-self-driving-be...

    TLDR: a Tesla can't identify a box in the road. It can finally identify people, but it still doesn't do a good job of avoiding them.

    > Tesla Vision has no issues detecting white semis crossing your path. Vehicles with radar, on the other hand, struggle to distinguish those from overhead bridges, so if one appears close to a bridge, you're SOL due to whitelisting.

    Both of these statements are false. Tesla Vision still has trouble detecting white semis as of October 2022. There are no self-driving vehicles that use radar for navigation (you appear to be mixing up radar with LIDAR, which has range sensing built in; all of Tesla's competitors are able to tell trucks apart from bridges, and truck-identification failure is unique to Tesla), though many regular modern cars do use radar for autobraking systems. As those systems are only intended for use at extremely short ranges directly in front of the vehicle, it's irrelevant whether the object detected is a bridge or a semi.

    > Tesla Vision in FSD is a much, much more developed version, which has been excellent at detecting its environment, especially now with the new occupancy network. Its decision-making needs work, but you will notice, when watching all those videos, that detection of vehicles - even occluded ones - is not a problem at all.

    This does not match reality. At all. Teslas still regularly swerve across lanes of traffic and into oncoming traffic. In a brand-new Tesla acquired by a co-worker several weeks ago, FSD could not identify cyclists on the road, failed to identify a number of pedestrians crossing at a crosswalk, did not successfully distinguish between semi trucks and the open sky, and successfully identified only about half of the other cars on the road with it. Maybe the super-duper secret version of Tesla Vision performs well, but the one actually available on Tesla vehicles right now performs worse than a drunk teenager.

    > Both Tesla and Karpathy himself have shown in multiple presentations that they focus on training on unique/difficult situations, because more data from perfect conditions is no longer useful to them. They have shown exactly how they do it, and even showed off the great infrastructure they've built for autolabeling.

    This is demonstrably false. Admission into the FSD program requires a safety score that cannot be achieved in areas with rough or steep roads and is almost impossible to achieve in urban traffic; ergo, they are by definition not focusing on training unique/difficult situations. Moreover, since they still can't reliably identify semi trucks, other cars, cyclists, or pedestrians, the "great infrastructure" for "autolabeling" is basically just fraud.