Comment by fabian2k

4 hours ago

As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

It's still damning that the data is this bad even then. Good data wouldn't tell us anything, but bad data likely means the AI is bad, unless they were spectacularly unlucky. And since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.

The "safety drivers" do nothing. They sit in the passenger seat and the only thing they have is a button that presumably stops the car and lets a remote operator take over.

> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.

I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.

In the first context we hope for AI to improve safety, whereas in the second we merely hope for it to improve productivity.

In both cases, a human is in the loop, which results in second-order complexity: the human adjusts their behaviour to the AI's reality, which in turn redefines what "good AI" means, in an endless loop.

> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

Sorry, that does not compute.

It tells you exactly whether the AI is any good: despite the fact that there were safety drivers on board, 9 crashes happened. That implies even more crashes would have happened without the safety drivers. Over 500,000 miles, that's pretty bad.

Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention.
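For scale, the per-mile arithmetic in this comment can be sketched as follows. This is a back-of-envelope normalization only; the function name is illustrative, and the human-driver baseline in the usage example is a placeholder assumption, not a sourced figure.

```python
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize a raw crash count to crashes per million miles driven."""
    if miles <= 0:
        raise ValueError("miles must be positive")
    return crashes / (miles / 1_000_000)

# The figures from the comment above: 9 crashes over 500,000 miles.
robotaxi_rate = crashes_per_million_miles(9, 500_000)
print(robotaxi_rate)  # 18.0 crashes per million miles

# ASSUMED baseline for comparison only -- plug in a real human-driver
# rate from NHTSA or insurer data before drawing any conclusion.
assumed_human_rate = 2.0
print(robotaxi_rate / assumed_human_rate)  # ratio vs. the assumed baseline
```

Note that this counts only actual crashes; as the replies below point out, driver interventions that prevented a crash are invisible in this number, so the true AI-only rate could be higher still.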

  • The problem is that we don't know how many incidents would have happened if there were no safety driver. How many times did the driver have to intervene to prevent an accident? IMO, those interventions should count towards the number of AI-driven accidents.

  • I'm a bit hesitant to draw strong conclusions here because there is so little data. I would personally assume that it means the AI isn't ready at all, but without knowing any details at all about the crashes this is hard to state for sure.

    But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.