
Comment by cs702

3 years ago

Please keep in mind that I'm talking about FSD Beta, not the current production software, which is dozens of versions behind.

Are you on Beta 10.69.2.3?

If I had to articulate my reasons:

* Judging by the videos available online, my perception is that many situations that were impossible for Tesla FSD Beta a year ago have become uneventful in recent weeks. Take a look at Chuck Cook's videos for example (I like the fact that he always highlights the failures).

* Judging again by the videos available online, my perception is that Tesla FSD Beta has encountered and had to deal with more crazy edge cases than any other system. A possible explanation for this is that for a long time Tesla FSD Beta hasn't been geofenced or restricted only to certain types of roads, like highways. You can test it anywhere in North America.

* Tesla FSD Beta currently has 160,000 individuals testing it without road restrictions. As far as I know, no other system has been exposed to similar open-ended large-scale testing.

* Occupancy networks look like a real breakthrough to me -- DNNs that predict whether each voxel in a 3D model is occupied by an object, using only video data as an input. I understood the high-level explanation of these DNNs on AI Day 2. I haven't seen anything like it from anyone else.

* Tesla's Dojo also looks like a breakthrough to me. I understood the high-level explanation of it on AI Day 2. IIRC, Dojo cabinets are 6x faster at training existing neural networks than Nvidia rigs, at 6x lower cost, so call it ~36x more efficient.
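
The occupancy-network idea in the list above can be sketched roughly as follows. This is a toy illustration of the input/output contract only (camera features in, per-voxel occupancy probabilities out), not Tesla's actual architecture; the grid size, feature dimension, and the single linear layer are all made up for the example.

```python
import numpy as np

# Toy sketch of an "occupancy network": map features extracted from video
# frames to a 3D voxel grid, where each voxel gets a probability of being
# occupied by an object. A real system would use a deep network over
# multi-camera video; here a single random linear projection stands in.

rng = np.random.default_rng(0)

GRID = (8, 8, 4)      # coarse voxel grid around the vehicle (hypothetical size)
FEAT_DIM = 64         # stand-in for a CNN/transformer feature vector

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_occupancy(image_feats, weights):
    """Project image features to per-voxel occupancy probabilities."""
    logits = image_feats @ weights           # shape: (prod(GRID),)
    probs = sigmoid(logits)                  # each value in (0, 1)
    return probs.reshape(GRID)               # shape: (8, 8, 4)

weights = rng.normal(0.0, 0.1, size=(FEAT_DIM, int(np.prod(GRID))))
image_feats = rng.normal(size=FEAT_DIM)      # stand-in for an encoded frame

occupancy = predict_occupancy(image_feats, weights)
print(occupancy.shape)                       # one probability per voxel
```

Downstream planning code can then threshold these probabilities (e.g. treat voxels above 0.5 as obstacles) without ever needing a per-object class label, which is the appeal of the approach for odd, never-before-seen obstacles.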
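
The "~36x" figure in the Dojo point above is just the two claimed factors multiplied together, since throughput-per-dollar compounds speed and cost:

```python
# Back-of-envelope check of the "~36x more efficient" claim. Both 6x figures
# are the commenter's recollection of Tesla's AI Day 2 claims, not verified
# benchmark numbers.
speedup = 6.0        # claimed training speedup vs. an Nvidia rig
cost_ratio = 6.0     # claimed cost reduction vs. an Nvidia rig

# Efficiency here means throughput per dollar, so the factors multiply:
efficiency_gain = speedup * cost_ratio
print(efficiency_gain)  # 36.0
```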

As someone who also likes to watch Chuck Cook, I don't think Tesla is close to Waymo.

Tesla FSD in its current state will either crash or seriously fuck up if you leave it unattended for a few hours, or maybe less (based on the disengagements in those videos). Forget about a driverless Tesla with the current FSD. Waymo has been operating driverless since 2019.

I do agree that it is progressing very nicely. IMO, Tesla FSD needs 2 more years and a hardware update and it will be there.

  • I totally agree with you that Tesla FSD seems more likely to have a serious crash, but if actual fully autonomous cars are the end goal, then the behavior of learning/beta models doesn't really matter except to the extent that it lets them get to the end goal (for the sake of this argument).

    All fully autonomous cars are in a different legal situation than Tesla. Tesla sells Joe Schmoe a car, tells him he can run FSD but that he's responsible and has to remain attentive, and then collects info about every disengagement while (mostly) avoiding legal responsibility for accidents in many cases.

    Waymo is fully responsible for every accident, etc., so they HAVE to proceed more cautiously or they'll lose the ability to run their cars. As someone else pointed out, they often operate only in very specific areas, and often even on specific streets within a geofence. So while on the surface Waymo may have full self-driving operating more effectively with fewer problems, they're doing so in a much more controlled environment and not getting the variety of data that Tesla gets from cars disengaging literally anywhere in the US.

    • > but if actual fully autonomous cars are the end goal, then the behavior of learning/beta models doesn't really matter

      I didn't sign up to be killed by some idiot tech bro testing a class project where they plumbed AlexNet into the steering wheel of a 2,000 kg vehicle and took a couple of steps downhill

      the streets are already dangerous enough for pedestrians

  • I've yet to see how Waymo and other self-driving systems perform in open-ended testing, outside tightly restricted, geofenced environments.

    Otherwise, I agree that Tesla FSD Beta has been progressing nicely. I don't know if it will take 1, 2, or 5 years to get FSD Beta to an acceptable rate of graceful failures, but I agree it looks likely to get there before the end of the decade!

    • I've yet to see Tesla FSD Beta perform in any meaningful operational environment, no matter how restricted, within a factor of 100x of Waymo or Cruise, and Waymo and Cruise are still at least a factor of 10x away from "equal to human drivers". Superiority cannot be claimed for a system that is unacceptably bad in all circumstances (i.e., 1,000x worse than the minimum acceptable standard for real commercial use) and worse in every case than a comparable alternative. The claim only makes sense if the system is better at something other than being allowed to produce unacceptably bad results in a more diverse range of circumstances.


    • >I've yet to see how Waymo and other self-driving systems perform in open-ended testing, outside tightly restricted, geofenced environments

      But you have seen how Tesla performs in such environments, and you aren't allowed to take your hands off the wheel.

      What makes you assume Tesla has the right approach and that the other companies have to be measured against it?


I'd sum up your points 1,2,3 as "more data". This would be a reason to think they can one day be ahead if they can take advantage of this, but not evidence that they are currently ahead.

Occupancy networks: Waymo published research on this before Tesla announced it at AI Day (though it's not clear to me who got there first: https://arxiv.org/pdf/2203.03875v1.pdf)

Tesla's Dojo -> Waymo has TPUs to train on

To me, all of this is outweighed by the fact that Waymo has a driverless deployment and Tesla does not. I am pretty biased, because as a Tesla owner I am pretty pissed off at this point at how false positives in the system's close-following detection are stopping my safety score from getting high enough to even access the product I purchased.

But it is pretty hard to say one way or another.

  • > I'd sum up your points 1,2,3 as "more data". This would be a reason to think they can one day be ahead if they can take advantage of this, but not evidence that they are currently ahead.

    I'd sum up those three points as "more data and more real-world, open-ended, large-scale testing by regular people." Big difference.

    > Occupancy networks: waymo has published research on this before Tesla announced this at AI day (not clear to me who got there first though https://arxiv.org/pdf/2203.03875v1.pdf)

    AFAIK, Tesla FSD Beta is the only system that has been using these DNNs for open-ended testing.

    > Tesla's Dojo -> Waymo has TPUs to train on

    I've trained AI models on TPUs. They're nowhere near 36x more efficient than Nvidia GPUs.

    > I am pretty biased because as a Tesla owner I am pretty pissed off at this point at how the false positives on the system in detecting close following are stopping my safety score from getting high enough to even be able to access the product I purchased.

    Oh, I get your frustration... but I also understand why Tesla is being so strict with safety scores at this point. It wouldn't be fair to blame them for that.

    • TPUs are hard to use outside of Google (I have tried both inside and outside Google). I think the situation is improving, but the efficiency from using a large pod is really remarkable. What topology did you train your models on? Within Google it's common to train across a whole pod, or even across multiple pods; 8x16x16 is the largest currently.

      Also, if Tesla actually published numbers on an MLPerf benchmark, I would be more inclined to believe claims about 36x better efficiency.

      https://mlcommons.org/en/training-normal-20/

      The fastest times I'm seeing here for image classification and for object detection (not the same task, but probably the closest proxy among the tasks benchmarked) are for TPUs.

      To know who has better training technology, I don't think you should be using a cost-efficiency metric; it seems to me the best thing to use would be who can train networks the fastest. Cost metrics are easy to game, especially if you are the one making the chips (of course, once the capital investment is made, making chips is cheaper for them than buying Nvidia chips). To measure who is ahead in technology, I think you have to look at who can train models the fastest, and right now, as far as I can tell, TPUs are unbeaten there. (Practically speaking, it's hard to pull off these large topologies externally, and there are other caveats with MLPerf related to how the training setups are optimized; nonetheless, it's a better signal than what Elon says in a presentation :) )

    • And as far as safety scores, my point is that the safety score is calculated incorrectly because of obvious false positives for "close following". I'm talking about being nowhere near a car, getting an alert that says I'm following too closely, and that dropping my safety score. I understand why the bar is high, but at this point I honestly suspect there is some tomfoolery going on with how that score is calculated.

    • You think it's actually "safety score" and not Tesla protecting their brand by restricting who gets to demonstrate the system?

      Maybe you should consider that when watching YouTube videos of people using it...

>haven't seen anything like it from anyone else.

Have you considered that other companies don't make it a priority to market these things? Elon knows his audience: people who will go on message boards and talk about it. Most people don't care about the underlying AI tech.

Do you think the other companies aren't making any breakthroughs? How do they have Robotaxis then?

Your entire claimed expertise seems to come from YouTube promotional videos. Maybe take a step back from marketing hype.

I really feel that you have been duped here. There is no reason to believe that Tesla Dojo even exists. At Hot Chips last year they showed some 3D renders of their supposed board. At Hot Chips this year they showed the same renders. At "AI Day" last month they showed a retarded humanoid robot. We have no basis to conclude that Dojo does, can, or will exist.