Tesla's Robotaxi data confirms crash rate 3x worse than humans even with monitor

3 hours ago (electrek.co)

Tesla has completely fumbled a spectacular lead in EVs and managed to snatch defeat from the jaws of victory. And instead of turning it around, we're supposed to believe they are going to completely pivot and then take over a market with far more developed competitors (e.g. Boston Dynamics).

That Elon is riding this wave amidst the transparency of the whole thing is the funniest part. It's like watching people lose money at the "three cup" game but the cups are clear.

The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.

There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.

The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.

Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
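
To put rough numbers on how much the crash definition alone can move the headline, here's a back-of-the-envelope sketch in Python. The 9 crashes and ~500,000 miles are the figures discussed in this thread; the human baseline and the unreported-crash multiplier are illustrative assumptions (picked to reproduce the 9x-raw vs. 3x-adjusted split mentioned elsewhere in the thread), not numbers from the article.

    # Tesla figures as discussed in the thread; human baseline is assumed.
    tesla_crashes = 9
    tesla_miles = 500_000
    tesla_rate = tesla_crashes / tesla_miles * 1_000_000  # 18 per million miles

    human_reported = 2.0     # assumed: police-reported human crashes per million miles
    unreported_factor = 3.0  # assumed: total human crashes ~3x the reported count

    print(tesla_rate / human_reported)                        # 9.0x vs. police-reported only
    print(tesla_rate / (human_reported * unreported_factor))  # 3.0x vs. all contact events

The same nine crashes yield a 9x or a 3x headline depending purely on which human denominator you harmonize against.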

  • I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.

    • While I think Tesla should be transparent, this article doesn't really make sure it is comparing apples to apples either.

      I think it's weird to characterize it as legitimate and then say "Go on, Tesla, convince me otherwise", as if Tesla would ever reach the same audience, or as if people would care to do their due diligence.

    • >I think it's fair to put the burden of proof here on Tesla.

      That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here on Tesla".

      https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

      More broadly, I think the internet is going to be a better place if comments/articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because they're directionally correct, e.g. "the evidence for WMDs in Iraq is flimsy, but that doesn't matter because Hussein was still a bad dictator".

  • I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides, it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is if insurers ask for higher (or lower) rates for self-driving. Because then I can be sure they have the data and did the math right to maximise their profits.

  • > The comparison isn't really like-for-like.

    This is a statement of fact but based on this assumption:

    > low-speed contact events that would often never show up as police-reported crashes for human drivers

    Assumptions cut both ways. Musk and Tesla have been consistently opaque about the real numbers behind their advertising. Given this history of total opacity and outright lies, it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out, you can bet your hat they're worse in reality.

  • "insurance-reported" or "damage/repair-needed" would be a better criteria for problematic events than "police-reported".

As far as I understand, those Robotaxis are only available within Austin so far. That is slow city traffic, so the number of miles per ride is very small. The numbers for human drivers, however, seem to take all kinds of roads into account. And of course, highways are where you cover most of your distance at the lowest risk of an accident. Has this been taken into account in the evaluation?

It would be ironic if people claimed the Tesla numbers for Autopilot are too optimistic because it is only used on highways, while failing to notice that, by the same logic, city-only numbers for FSD would be statistically pessimistic.
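
A toy example of that road-mix effect (the rates and mileage share below are made up purely to show the direction of the bias):

    # Assumed crash rates per million miles, and an assumed share of
    # human miles driven in cities; illustrative numbers only.
    city_rate, highway_rate = 4.0, 1.0
    city_share = 0.4

    human_all_roads = city_share * city_rate + (1 - city_share) * highway_rate
    print(human_all_roads)              # 2.2 per million miles, blended
    print(city_rate / human_all_roads)  # ~1.8x: a city-only fleet that exactly
                                        # matches humans in the city still "looks worse"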

  • It does look extremely pessimistic. For example, one of the 'incidents' is that a car hit a curb in a parking lot at 6 mph.

    No human driver would report this kind of incident. A human driver would probably forget it after the next traffic light.

    While it's clearly Tesla's fault (if you hit any static object, it's your fault), when you count this kind of 'incident', of course the numbers look worse than humans'.

To be honest I think the true story here is:

> the fleet has traveled approximately 500,000 miles

Let's say they average 10 mph and operate 10 hours a day; that's 5,000 car-days of travel, or, to put it another way, about 30 cars over 6 months.

That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.

One crash in this context is going to just completely blow out their statistics, so it's kind of dumb to even talk about the statistics today. The real takeaway is that the Robotaxis don't really exist: they're in an experimental phase, and we're not going to get real statistics until they're doing 1,000x that mileage. That won't happen until they've built something that actually works, which may never happen.
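
Spelling out that back-of-the-envelope (the 10 mph and 10 hours/day are guesses, not reported figures):

    miles = 500_000     # cumulative fleet miles, per the article
    avg_speed_mph = 10  # assumed average speed in slow city traffic
    hours_per_day = 10  # assumed daily operating hours
    days = 180          # roughly six months

    car_days = miles / avg_speed_mph / hours_per_day  # 5,000 car-days
    print(car_days, car_days / days)                  # ~28 cars, call it 30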

  • The more I think about your comment on statistics, the more I change my mind.

    At first, I think you’re right - these are (thankfully) rare events. And because of this, the accident rate is Poisson distributed. At this low of a rate, it’s really hard to know what the true average is, so we do really need more time to know how good/bad the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. But, we do have the statistical models to work with these rare events.

    But then I think about your comment about it only being 30 cars operating over 6 months. Which makes sense, except that it's not like having a fleet of individual drivers. These robotaxis should all be running the same software, so statistically it's more like one person driving 500,000 miles. That is a lot of miles! I've been driving for over 30 years and I don't think I've driven that many.

    If we are comparing the Tesla accident rate to people in a consistent manner, it’s a valid comparison.
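
    To make the "hard to know the true average" point concrete, here is a minimal sketch of the exact (Garwood) Poisson interval, taking the thread's 9 crashes in ~500,000 miles at face value:

        from scipy.stats import chi2

        crashes, miles = 9, 500_000

        # Exact (Garwood) 95% confidence interval for a Poisson count
        lo = chi2.ppf(0.025, 2 * crashes) / 2        # ~4.1 crashes
        hi = chi2.ppf(0.975, 2 * (crashes + 1)) / 2  # ~17.1 crashes

        scale = 1_000_000 / miles
        print(crashes * scale)         # point estimate: 18 per million miles
        print(lo * scale, hi * scale)  # 95% CI: roughly 8 to 34 per million miles

    The interval spans about a 4x range, which is exactly the "more data required" problem.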

  • Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.

    At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.

  • > The real take away is that the Robotaxis don't really exist

    More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.

  • But deep learning is also about statistics.

    So if the crash statistics are insufficient, then we cannot trust the deep learning.

  • >One crash in this context is going to just completely blow out their statistics.

    One crash in 500,000 miles would merely put them on par with a human driver.

    One crash every 50,000 miles would be more like having my sister behind the wheel.

    I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!

    If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: its license would have been revoked and suspended after the first two or three accidents in my sister's state, and it would have been thrown in JAIL as a "scofflaw" if it continued driving.

    • > One crash in 500,000 miles would merely put them on par with a human driver.

      > One crash every 50,000 miles would be more like having my sister behind the wheel.

      I'm not sure if that leads to the conclusion that you want it to.
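
      For what it's worth, taking the parent's assumed baseline (one crash per 500,000 miles) at face value, the chance of a driver at that rate racking up 9 crashes in 500,000 miles by bad luck alone is tiny:

          from scipy.stats import poisson

          expected = 1.0  # assumed human baseline: 1 crash per 500k miles
          observed = 9

          # P(X >= 9) when X ~ Poisson(1); sf(k) is P(X > k)
          print(poisson.sf(observed - 1, expected))  # ~1.1e-06

      So if that baseline were right, this wouldn't be noise; the real dispute is whether the baseline and crash definitions are comparable at all.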

Elon promised self-driving cars in 12 months back in 2017? He's also promising Optimus robots doing surgery on humans in 3 years? Extrapolating… Optimus is going to kill some humans and it will all be worth it!

  • Elon is aware that Tesla's insane market valuation would crash 10x if it stays a car company.

    There isn't enough money, and more importantly not enough margin, in the car industry to warrant such a valuation, so he has to pivot away from cars into the next thing.

    Just to give an example of how risky it is for Tesla to remain a car company:

    In 2025, Toyota had 3.5 times Tesla's revenue, 8 times the net income, and twice the margin.

    And yet Toyota's market cap is about one sixth of Tesla's.

    It would take Tesla a gargantuan effort to match Toyota's numbers and margins, and even if it did, it would be a disaster for Tesla's stock.

    Hell, Tesla makes much less money than Mercedes-Benz, and with a smaller margin.

    Mercedes has 60% more revenue and twice the net income. Yet, Tesla is valued around 40 times Mercedes-Benz.

    Tesla *must* pivot away from cars and make them a side business, or sooner or later that stock is crashing, and it will crash fast and hard.

    Musk understands that, which is why he's focusing on robotaxis and robots. It's the only way to sell Tesla to naive investors.
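
    Taking the comment's own multiples at face value, the implied gap in earnings multiples is stark (these are just the ratios quoted above, not exact financials):

        toyota_income_multiple = 8   # Toyota's net income ~8x Tesla's
        toyota_cap_multiple = 1 / 6  # Toyota's market cap ~1/6 of Tesla's

        # Ratio of Tesla's P/E to Toyota's P/E, from those two numbers alone
        print(toyota_income_multiple / toyota_cap_multiple)  # ~48x more per dollar of profit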

    • The best part of all of this is that, given their history and the state of robotaxis as a whole, they will fail, and Tesla will crash. And it'll be a great day. The hype and the obscene overvaluation are utterly moronic.

      Look at how much longer Waymo has been at this, and how much more experience they have, and they still have multiple issues popping up online every week - and that's with running in a very small, well-mapped, carefully planned-out area. Musk wants robotaxis globally; that's just not happening, not any time soon, and certainly not within the 10-year deadline for him to get his trillion-dollar bonus from Tesla, which is the only reason he's pushing so hard to make it happen.

    • > Elon is aware that Tesla insane market valuation would crash 10x if it stays a car company.

      I see nothing wrong here - just a correction back to reality.

      I understand why people adored him blindly in the early days, but liking him now, after it's clear what sort of person he is and always will be, is the same as liking Trump. Many people still do it, but it's hardly a defensible position unless one is already invested in his empire.

All these self-driving and "driver assistance" features like lane keeping exist to satisfy consumer demand for a way to multitask while driving. Tesla's is particularly cancerous, but all of them should be banned. I don't care how good you think the lane keeping in whatever car you have is; you won't need it if you keep your hands on the wheel, your eyes on the road, and don't drive when drowsy. Turn it off and stop trying to delegate your responsibility for what your two-ton speeding death machine does!

  • I think it’s unfair to group all those features into “things for people who want to multitask while driving”.

    I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.

    These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.

    • There are two layers, both relating to concentration.

      Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.

      The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.

  • Have you ever driven more than 200 km at an average of 80 km/h, with plenty of turns on the highway? Perhaps after work, just to see your family once a month?

    Driver fatigue is real, no matter how much coffee you drink.

    Lane-keep is a game changer if the UX is well done. I'm way more rested when I arrive at my destination with my Model 3 than when I drive a regular ICE car with bad lane-assist UX.

    EDIT: people who look at their phones will still look at their phones with lane-keep active; it just makes that a little safer for them and everyone else, really.

    • If you're on a road trip, pull the fuck over and sleep. Your schedule isn't worth somebody else's life. If that's your commute, get a new apartment or get a new job. Endangering everybody else with drowsy driving isn't an option you should ever find tenable.

  • But this is why people bought Tesla. Musk promised that the car is automatic.

    • Don’t be silly. Why would a reasonable person think “Full Self Driving” meant that a car would fully drive itself?

  • We made drunk driving super illegal and that still doesn't stop people. I would rather they didn't in the first place, but since they're going to anyway, I'd really rather they have a computer that does it better than they do. FSD will pull over and stop if the driver has passed out.

    • If we could ensure that only drunk people use driver assistance features, I'd be all for that. The reality is that 90% of the sober public are now driving like chronic drunks because they think their car has assumed the responsibility of watching the road. Ban it ALL.

> showing cumulative robotaxi miles, the fleet has traveled approximately 500,000 miles as of November 2025.

Comparing stats from this many miles to just over 1 trillion miles driven collectively in the US in a similar time period is a bad idea. Any noise in Tesla's data will change the ratio a lot. You can already see it in the monthly numbers, which vary between 1 and 4.

This is a bad comparison with not enough data - like how my household's average number of teeth per person is ~25% higher than the world average! (Includes one baby.)

Edit: feel free to actually respond to the claim rather than downvote
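
To illustrate how jumpy the monthly numbers make the estimate, assuming the mileage was roughly flat across the reporting window (~100,000 miles a month is my guess, not a reported figure):

    monthly_miles = 100_000  # assumed: ~500k miles spread over ~5 months
    for crashes in (1, 4):   # the observed monthly range
        print(crashes, crashes / monthly_miles * 1_000_000)
    # 10 vs. 40 crashes per million miles: a 4x swing month to month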

  • I think what you say would be fair if the stance of Elon and his fanboys were "we need more data" rather than "we will be able to scale self-driving cars very quickly, very soon".

As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

Still, it's damning that the data is this bad even so. Good data wouldn't tell us anything; bad data likely means the AI is bad, unless they were spectacularly unlucky. But since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.

  • > As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

    Sorry that does not compute.

    It tells you exactly whether the AI is any good: despite the fact that there were safety drivers on board, 9 crashes happened, which implies that even more crashes would have happened without them. Over 500,000 miles, that's pretty bad.

    Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention.

    • I'm a bit hesitant to draw strong conclusions here because there is so little data. I would personally assume that it means the AI isn't ready at all, but without knowing any details at all about the crashes this is hard to state for sure.

      But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.

  • > As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.

    I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.

    In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.

    In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.

I am so tired of people defending Tesla. I wrote off Tesla a long time ago, but what gets me are the people defending their tech. We can all go see the products and experience them.

The tech needs to be at least 100x more error-free than humans. It cannot be merely on par with the human error rate.

  • We tend to defend companies that push the frontiers of self-driving cars, because the technology has the potential to save lives and make life easier and cheaper for everyone.

    As engineers, we understand that the technology will go from unsafe, to par-with-humans, to safer-than-humans, but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.

    Tesla's approach has been more risk averse and conservative than others. It has compiled data and trained its models on billions of miles of real world telemetry from its own fleet (all of which are equipped with advanced internet-connected computers). Then it has rolled out the robotaxi tech slowly and cautiously, with human safety drivers, and only in two areas.

    I defend Tesla's tech, because I've owned and driven a Tesla (Model S) for many years, and its ten-year-old Autopilot (autosteer and cruise control with lane shift) is actually smoother and more reliable than many of its competitors current offerings.

    I've also watched hours of footage of Tesla's current FSD on YouTube, and seen it evolve into something quite remarkable. I think the end-to-end neural net with human-like sensors is more sensible than other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.

    Unlike many commenters on this platform, I have no political issues with Elon, so that doesn't colour my judgement of Tesla as a company and its technological achievements. I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors. Its ethics are sound.

    As much as I'd love to pile in on Tesla, it's unclear to me the severity of the incidents (I know they are listed) and if human drivers would report such things.

    "Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?

    By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".

    I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.

    For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but the crashes are of lower severity, leading to an overall improvement in safety. Judging by TFA alone, I can't tell that this isn't the case here. I can imagine a robotaxi having a different distribution of crash frequency and severity than a human driver.

    • He compared against estimated statistics for non-reported accidents (typically like your example: involving only one vehicle and resulting only in scratched paint) to arrive at the 3x. Otherwise the title would have been 9x (which is in line with the 10x a data-analyst blogger arrived at ~3 months ago).

      > roundabouts in cities (in Europe anyway) tend to increase the number of crashes

      Not in France, according to the data. It depends on the speed limit, but overall they decrease accidents by 34%, and by almost 20% when the speed limit is 30 or 50 km/h.

    • >they tapped a bollard

      If a human had eyes on every angle of their car and still did that, it would represent a lapse in focus or control -- but humans don't have the same advantages here.

      With that said: I would be more concerned about what an error like that represents when my sensor-covered auto-car makes it; it would make me presume there was an error in detection -- a big problem.