
Comment by z7

4 hours ago

The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.

There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.

The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.

Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
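To put a number on "uncertainty is huge": treating the nine crashes as a Poisson count, an exact (Garwood) 95% interval on the underlying mean can be computed from the count alone. A minimal sketch using only the standard library — no article data is assumed beyond the count of 9:

```python
import math

def poisson_cdf(k: int, mu: float) -> float:
    """P(X <= k) for X ~ Poisson(mu)."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def bisect(f, lo, hi, iters=100):
    """Find the root of f in [lo, hi], assuming f crosses zero once."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

k = 9  # observed crashes
# Lower bound: the mean where P(X >= k) = 0.025, i.e. P(X <= k-1) = 0.975
lower = bisect(lambda mu: poisson_cdf(k - 1, mu) - 0.975, 1e-9, 50)
# Upper bound: the mean where P(X <= k) = 0.025
upper = bisect(lambda mu: poisson_cdf(k, mu) - 0.025, 1e-9, 50)

print(f"95% CI for the true mean count: [{lower:.1f}, {upper:.1f}]")
# roughly [4.1, 17.1]
```

The interval runs from about 4.1 to 17.1 — the true rate consistent with nine observed events spans roughly a factor of four, which is why any single multiplier derived from this sample carries wide error bars.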

All of your arguments are expounded upon in the article itself, and their conclusions still hold, based on the publicly available data.

The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.
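The arithmetic behind those multiples is easy to reconstruct. A sketch — with the caveat that the cumulative-mileage figure and the police-report baseline below are hypothetical placeholders chosen only to reproduce the quoted 3x and 9x multiples, since the thread doesn't quote the article's actual numbers:

```python
TESLA_CRASHES = 9
TESLA_MILES = 600_000                    # hypothetical cumulative robotaxi miles
HUMAN_MILES_PER_ANY_INCIDENT = 200_000   # any-contact baseline discussed in TFA
HUMAN_MILES_PER_POLICE_REPORT = 600_000  # hypothetical police-report baseline

tesla_miles_per_incident = TESLA_MILES / TESLA_CRASHES  # ~66,667 miles/incident

# How many times more often the robotaxis crash than each human baseline:
ratio_any = HUMAN_MILES_PER_ANY_INCIDENT / tesla_miles_per_incident
ratio_police = HUMAN_MILES_PER_POLICE_REPORT / tesla_miles_per_incident

print(f"vs any-contact baseline:   {ratio_any:.1f}x")     # → 3.0x
print(f"vs police-report baseline: {ratio_police:.1f}x")  # → 9.0x
```

Note the sensitivity: the multiple scales linearly with whichever human baseline you pick, which is exactly why the choice between "any contact" and "police-reported" moves the headline figure from 3x to 9x.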

The denominator problem is made up. Tesla Robotaxi has only launched in one location, Austin, and only since July (well, June 28th, so maybe there's a few days' discrepancy?). So the crash data and the mileage data can only refer to this same period. Furthermore, if the miles driven actually cover some additional stretch of time, the picture gets even worse for Tesla, as the mileage denominator for those 9 incidents shrinks.

The analysis indeed doesn't distinguish between the types of accidents, but this is irrelevant. The human driver estimates for miles driven without incident also don't distinguish between the types of incidents, so the comparison is still very fair (unless you believe people intentionally tried to get the Tesla cars to crash, which makes little sense).

The comparison to Waymo is also done based on incidents reported by both companies under the same reporting requirements, to the same federal agency. The crash definitions and reporting practices are already harmonized, at least to a good extent, through this.

Overall there is no way to look at this data and draw a conclusion that is significantly different from the article: Tesla is bad at autonomous driving, and has a long way to go until it can be considered safe on public roads. We should also remember that robotaxis are not even autonomous, in fact! Each car has a human safety monitor that is ready to step in and take control of the vehicle at any time to avoid incidents - so the real incident rate, if the safety monitor weren't there, would certainly be even worse than this.

I'd also mention that 5 months of data is not that small a sample size, despite you trying to make it sound so (only 9 crashes).

TFA does a comparison with average (estimated), low-speed contact events that are not police-reported by humans, of one incident every 200,000 miles. I think that's high - if you're including backing into static objects in car parks and the like, you can look at workshop data and extrapolate that a lower figure might be closer to the mark.

TFA also does a comparison with other self-driving car companies, which you acknowledge, but dismiss: however, we can't harmonize crash definitions and reporting practices as you would like, because Tesla is obfuscating their data.

TFA's main point is that we can't really know what this data means because Tesla keep their data secret, but others like Waymo disclose everything they can, and are more transparent about what happened and why.

TFA is actually saying Tesla should open up their data to allow for better analysis and comparison, because at the moment their reporting practices make them look crazy bad.

  • > TFA's main point is that we can't really know what this data means because Tesla keep their data secret

    If that's so, then the article title is very poor.

Tesla could share real, complete data at any time. The fact that they don't is likely an indicator that the data does not look good.

  • You can do this with every topic. XYZ does not share this, so IT MUST BE BAD.

    • Yes, that's very often the case with things that would very likely be shared if it looked good.

      There are things that don't get shared out of principle. For example there are anonymous votes or behind the scenes negotiations without commitment or security critical data.

      But given that Musk has been parading around vague promises for a very long time, it seems sharing data that looks very good is certainly something they would do.

I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides, it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is if insurers ask for higher (or lower) rates for self-driving. Because then I can be sure they have the data and did the math right to maximise their profits.

  • Thank you. Everyone is hiding disengagements and settling to hide accidents. This will not be fixed or standardized without changes to the laws, which for self-driving have been largely written by the handful of companies in the space. Total, complete regulatory capture.

I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.

  • While I think Tesla should be transparent, this article doesn't really make sure it is comparing apples to apples either.

    I think it's weird to characterize it as legitimate and then say "Go on, Tesla, convince me otherwise," as if the same audience would ever be reached by Tesla, or as if people would care to do their due diligence.

    • It’s not weird. They have a history of over promising to the point that one could say they just straight up lie on a regular basis. The bar is higher for them because they have abused the public’s trust and it has to be earned again.

      The results have to speak for Tesla very loudly and very clearly. And so far they don’t.


    • Tesla (Elon Musk really) has a long history of distorting the stats or outright lying about their self driving capabilities and safety. The fact that folks would be skeptical of any evidence Tesla provided in this case is a self-inflicted problem and well-deserved.


  • This has nothing to do with burden of proof, it has to do with journalistic accuracy, and this is obviously a hit piece. HN prides itself on being skeptical and then eats up "skeptic slop."

  • >I think it's fair to put the burden of proof here on Tesla.

    That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here on Tesla".

    https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

    More broadly, I think the internet will be a better place if comments and articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because they're directionally correct, e.g. "the evidence of WMDs in Iraq is flimsy, but that doesn't matter because Hussein was still a bad dictator".

    • The point is this: the article writer did what research they could do given the available public data. It's true that their title would be much more accurate if it said something like "Tesla's Robotaxi data suggests crash rate may be up to 3x worse than human drivers". It's then 100% up to Tesla to come up with cleaner data to help dispel this.

      But so far, if all the data we have points in this direction, even if the certainty is low, it's fair to point this out.

    • I don’t think it’s a motte-and-bailey fallacy, because the motte is not well established. Tesla clearly does not believe that the burden of proof is on them, and by extension neither do regulators and legislators.

Oh. Well then. May we see the details of these minor contact events so that people don’t have to come here and lie for them anymore?

How corrupt and unaccountable to the public is the city of Austin Texas, even, for allowing them to turn in incident reports like this?

electrek.co has had a beef with Tesla, at least in recent years.

  • Absolutely.

    Let's examine the Electrek editor's feed, to understand how "impartial" he is about Tesla:

    https://x.com/FredLambert

    • Yup.

      Btw, do you happen to know, why electrek.co changed their tune in such a way? I was commenting on a similarly negative story by the same site, and said that they are always anti-Tesla. But then somebody pointed out that this wasn't always the case, that they were actually supportive, but then suddenly turned.


"Insurance-reported" or "damage/repair-needed" would be better criteria for problematic events than "police-reported".

> The comparison isn't really like-for-like.

This is a statement of fact but based on this assumption:

> low-speed contact events that would often never show up as police-reported crashes for human drivers

Assumptions work just as well both ways. Musk and Tesla have been consistently opaque when it comes to the real numbers they base their advertising on. Given this past history of total lack of transparency and outright lies it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out you can bet your hat they're worse in reality.

oh hacker news, never change. "crashes 3x as much as human driven cars" but is that REALLY bad? who knows? pure gold

Also worth pointing out that Electrek's coverage of Tesla turned sour after the editor, Fred Lambert, took a particular dislike to the brand and to Elon Musk.

It's pretty clear from his X feed:

https://x.com/FredLambert

The guy has serious Musk Derangement Syndrome.