Tesla concealed fatal accidents to continue testing autonomous driving

2 hours ago (rts.ch)

Teslas turning off Autopilot seconds before a crash, apparently to avoid being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

  • I think this is part of the reason I am wary of trying it (including some competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course-correct at the literal last second.

    • SAE level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone in between shouldn't be a valid feature.

    • Interestingly, I think that similar types of arguments are made against "agentic coding"

      If you don't pay constant attention, you will never notice when it slips in a bug or security issue

    • Treat it like a driver assistance system. I treat FSD the same as I treat Adaptive Cruise Control and Lane Keep Assist in my CR-V: I keep my hands on the steering wheel and follow along with the decision making.

  • To be fair, that report says

    > the self-driving feature had “aborted vehicle control less than one second prior to the first impact”

    It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, something like 5s prior would be a reasonable cutoff.

    • It's an insane reversal of roles. In a standard level 2 ADAS, the system detects an impending collision the driver has not responded to and pumps the brakes. Tesla FSD does the reverse: it detects an impending collision that it has not responded to, and shuts itself off instead of pumping the brakes. It's pure insanity.

      Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and since they own and control the data, it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse for why FSD was not at fault, despite the fact that FSD created the situation in the first place.

      [1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...

    • IDK, this has the same unethical energy as police turning off body cameras.

      In the BEST CASE, this is a confluence of coincidences: engineering knows about it and leaves it "low prio, won't fix" because it's advantageous for metrics.

      In the worst case, this is intentional.

      In any case, the "right thing to do" is NOT to turn off the cameras just before a collision, and yet it happens.

      This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either deliberately or through deliberate omission.

    • This is a policy that Tesla put in place, period. Handing control to the driver suddenly, at a weird moment, can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it's all too unexpected.

    • The few Tesla post-mortems I read early on stated that FSD turned off before impact and used this as a defence of their system. If they had shared that this happened 1 second before impact (far too late for a human to respond), I’d have sympathy. I have never read a Tesla statement that contained this information.

      For normal incidents, 2 seconds is taken as the response time to be added before corrective action (avoidance, braking) takes effect. I’d expand this for FSD because it implies a lower level of engagement, so you need more time to re-engage with the car (rough numbers below).
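
      For scale, here's a quick back-of-the-envelope in Python on what those seconds mean in distance; the speed and re-engagement figures are my own assumptions, not from any report or from the parent comment:

        speed_mps = 30.0   # ~67 mph highway speed (assumed)
        reaction_s = 2.0   # the standard 2 s response time cited above
        reengage_s = 2.0   # assumed extra time to re-take control after an FSD handoff

        # Distance covered before any corrective action even begins:
        print(speed_mps * reaction_s)                 # 60.0 m
        print(speed_mps * (reaction_s + reengage_s))  # 120.0 m

      At highway speed, every extra second of handoff lag is another ~30 m of travel with nobody effectively in control.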

    • This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second, causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event, rather than requiring that self-driving be active at the exact moment of impact.

    • So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.

      That's still not a good look.

      And it does mean that FSD shouldn't be trusted as much as it is: if the car is putting itself into unresolvable situations, that's still a problem with FSD, even if it isn't in direct control at the moment of impact.

  • It's been well known for a while now, and it's not about avoiding being recorded as active; it's to keep a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?

    AEB should still be working to pump the brakes AFAIK, but auto-steer and cruise control are disabled, even while the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.

    EDIT: IIRC the threshold for disengagement is 1s.

    • >> Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

      > It's well known for a while now, and it's not to avoid recording being active, it's to avoid a possibly damaged computer to keep working in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and wheels keep spinning at full speed while first responders try to secure the car?

      That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if 1) their implementation also stopped recording seconds before a crash, or 2) they publicly claimed the system wasn't responsible because it had turned itself off, then Tesla is behaving unethically and dishonestly.

  • Disregarding the fact that NHTSA findings apparently contradict it (though that may just be a more recent change than the 2022 report), Tesla claims to use five seconds before a collision event as the threshold for their data reporting on their FSD marketing page:

    > If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]

    In theory, that should more than cover the common perception-response times of roughly 1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process, as driver assistance systems return control to the driver, and its impact on driver response times and overall alertness.

    If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we’re now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the hazard they're now barreling down on. (A sketch of how the five-second rule plays out is at the end of this comment.)

    For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that during normal braking back when I was learning to drive.

    0. https://www.tesla.com/fsd/safety#:~:text=within%20five%20seconds
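
    To make the quoted policy concrete, here is a minimal sketch of that five-second attribution window as described; the function and field names are hypothetical, for illustration only, and not Tesla's actual code or data schema:

      from typing import Optional

      # Attribution window from the FSD safety page quoted above.
      ATTRIBUTION_WINDOW_S = 5.0

      def counts_as_fsd_collision(impact_t: float,
                                  fsd_disengage_t: Optional[float]) -> bool:
          """True if a collision counts as "FSD engaged" for collision-rate
          reporting. fsd_disengage_t is when FSD stopped controlling the
          vehicle (same clock as impact_t), or None if it was still engaged
          at impact. Hypothetical names, illustrative only."""
          if fsd_disengage_t is None:
              return True  # still engaged at the moment of impact
          # Counted against FSD if it was active at any point within the five
          # seconds before impact, even if it aborted (or the driver took
          # over) just before the crash.
          return impact_t - fsd_disengage_t <= ATTRIBUTION_WINDOW_S

      # FSD aborts 1 s before impact: still attributed to FSD.
      assert counts_as_fsd_collision(impact_t=100.0, fsd_disengage_t=99.0)
      # Driver disengaged 8 s before impact: not attributed.
      assert not counts_as_fsd_collision(impact_t=100.0, fsd_disengage_t=92.0)

    Under this stated rule, the "aborted vehicle control less than one second prior to the first impact" cases discussed upthread would still be counted against FSD in Tesla's own reported collision rates.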

Tesla has a very bad track record in terms of both compliance and disclosure when it comes to autonomy incidents.

  • Did you find the article lacked any real numbers backing the claims? It was a bit weird how vague that information was.

    Individual tragic anecdotes aside, the vagueness of the article really diluted the merit of its claims.

So... for a bit of context on the video and the article:

- The documentary is from RTS, the main publicly owned broadcaster in Switzerland. It is not the typical European publicly owned media outlet: it is generally pretty well funded (contrary to most), tends to produce high-quality content, and tends to be independent and rather neutral (leaning slightly left politically).

- The video is in French because, in Switzerland, the public media are divided into three groups by regional language: RTS for French, SRF for German and RSI for Italian. That's why you get a German translation.

- They are generally pretty cooperative and open-minded. If any of you want to submit English subtitles, just contact them; they might accept (I'm not promising anything).

  • Sorry, but you seem to be implying that European publicly owned media outlets are not normally to be trusted. Why?

    I started out writing a list of European countries with high quality public broadcasters, but the comment started looking silly since the list quickly grew very long.

    • I've lived for many years in two large European countries, and in both cases I found their public broadcasters hard to trust. Perhaps you have deep, first-hand knowledge of multiple European countries, but in my experience they take too much money and are heavily biased. For that reason I'd prefer there to be no public broadcast companies, if only so my tax money doesn't support manipulation. In 30+ years of life, I've never encountered a truly neutral public broadcaster in Europe, though I'm sure there may be exceptions.

    • They have left-leaning biases. RTVE is basically a propaganda channel for the PSOE at this point, and France Info/France 2 have center-left biases, which makes them neither neutral nor representative of the whole of society. They are all well-funded, though.

    • > Sorry, but you seem to be implying that European public owned media outlets are not normally to be trusted. Why?

      The quality of European publicly owned media is highly country-specific and varies quite a lot:

      - Some of them are critically underfunded, and it shows (a tendency toward cheap sensationalism, superficial investigation, or recycled content).

      - Some of them are politically rooted (left or right) or controlled through direct/indirect government involvement.

      But all considered, I would say the average one is still an order of magnitude better, in terms of content quality and independence, than the average private media outlet.

    • The national broadcaster here in Romania has always leaned toward whoever was paying the bills, hence toward whoever holds political control over the country.

      I can say the same about the foreign bureaus of state-owned media outfits like Deutsche Welle and Radio France Internationale, both of which actively rooted for the Romanian political candidate seen as closer to German and French interests (I’m talking about the last couple of rounds of Romanian presidential elections).

One day an AI will obviously be infinitely better at driving than a human, but that day is not yet here.

  • It is finitely better today and will be better still. This doesn't mean it's better at everything a human driver can do; it's just better on average. The jagged frontier is real and a very important safety consideration; nevertheless, the averages matter, too.

  • Personally, I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it.

    I don't ever want to be inside an AI-driven vehicle that might decide to sacrifice me to minimize other damage.

    • > to minimize other damage

      You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.

      What's the ratio between "bodies of your own kids" and "other human bodies you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?

      I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case I tip my hat to you), but what SHOULD it be?

      *Meaning, with a ratio of 2, for example, it would take 2 strangers' deaths to justify losing one of your own kids.

    • > not sure I would trust it

      This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.

      On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.

    • What would that guarantee look like, and would it be legal to sell a product that made it?

      "Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I don't think you can legally sell a product that promises that.

  • Was it 2015 when HN was full of predictions that we wouldn't be driving ourselves within five years? From what I see, serious accidents with human drivers are caused by deliberately doing the dangerous thing (in my corner of the world, mostly overtaking in the wrong place or at the wrong time, or both). Beyond that, humans drive very safely. Outside of tightly controlled environments, I don't see self-driving getting much better until systems have a proper world-model. So, maybe never.

This is about the old autopilot, not FSD, and there doesn't seem to be anything new in the article. This is based on the same leaked data which has been public since 2023. The title seems to be inaccurate, as there's nothing to indicate that they hid fatal accidents.

Look, I dislike Tesla as much as the next person; I think it is wildly over-hyped and over-valued. But this article is just slop.

The headline says "How Tesla hid accidents to test its Autopilot", but the actual article has no explanation of (1) how Tesla hid anything or, for that matter, (2) whom Tesla hid this information from.

It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.

Tesla has a pattern of making deceptive promises and deceptive disclosures, but this article doesn't make that case at all.

  • >Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.

    This is something I find frequently as well, more so with Musk-related things than with Tesla. Lord knows there are plenty of things to be critical of.

    If investigative journalism wants to regain the respect it once had, a smaller number of allegations backed by concrete evidence serves both the public and faith in the media better than large quantities of vague claims.

    I admit that if you want to sway public opinion, the latter is more effective, but it's also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.

  • After you wrote this, I went and read the article; I also didn't see much there. I wonder why you are getting downvoted. And to be clear, I'm also not a Tesla fan (the truck is dumb).

Hot take, but I feel like Tesla owners (hell, anyone with an 'autonomous driving' vehicle) need some kind of modern lecture based on the Children of the Magenta talk on automation dependence in aircraft. Mandatory, before you can switch the system on.

FSD has produced this generation's children of the magenta line.

https://www.youtube.com/watch?v=5ESJH1NLMLs

Look, there is no way corporations would lie in their own interest. Especially when they've spent tens of billions developing something.

It's not like they sold us leaded gasoline or "healthy tobacco" for decades.

To pile on to this pathetic excuse for a company: anyone considering buying a Tesla should know that it is the #1 brand for fatal accidents in the United States, with over twice the fatal-accident rate of a typical automaker: https://www.roadandtrack.com/news/a62919131/tesla-has-highes...

This terrible statistic can’t just be explained by aggressive owners or some other such factor. Dodge has plenty of aggressive drivers buying their 700 HP V8 rear-wheel-drive vehicles, yet they have a better fatal-accident rate than Tesla.

I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.

The crash test safety awards their vehicles have won are clearly not representative of reality.

The self-driving system Tesla offers is only “ahead” of the competition because the competition is unwilling to sell an unsafe system.

  • Your link only suggests driver and road conditions are to blame. Considering the amount of power even a base model delivers, I would lean towards the driver. What they do with FSD stats is terrible, and it would be refreshing to have some unbiased looks at them. Your narrative, though, is too biased, and the link makes no connection between Tesla and responsibility for the fatalities.

  • > Tesla vehicles have a fatal crash rate of 5.6 per billion miles driven, according to the study; Kia is second with a rate of 5.5,

    Basically the same as Kia. Why are Kias so bad?

    • Two reasons I can see.

      Kia markets much smaller, cheaper cars with fewer safety features. Tesla, meanwhile, made front-page news at one point claiming to have produced the safest car ever.

      Tesla gives people driving its cars a false sense of security.

    • Until recently, Kias were sub-entry-level shitboxes.

      This would affect both driver selection and performance during an impact.

      Slap a ridiculously powerful drivetrain and a premium price tag on one and you have a Tesla.

    • I am sure the safety systems in a Kia are a component, but I would bet the bigger weighting is on driver profile.

    • You’re so close to understanding!

      Tesla stans tell us that they’re the most luxurious and best-built cars on the road; in reality they’re as poorly built as an economy brand with a reputation for low quality, sold to people who don’t want to pay for a Toyota.

  • That study was pretty thoroughly debunked. Also, I believe it was put out by a lobbying group representing auto dealerships, who see the Tesla direct-to-consumer model as a mortal threat. There is a lot of legitimate criticism to be directed at Tesla, but the iSeeCars study "ain't it".

    • I've heard people saying the study is bad, but whenever I've asked why, the answers have been pretty bad too. Do you have a good source for why we should disregard it?

    • Find a link that shows it’s debunked, then? All they did was analyze federal crash data.

      I don’t know what’s so hard to believe about the study. Tesla’s numbers are pretty similar to other low-performing brands.

  • For a while they were the safest cars in crash tests, weren't they? Was there an inflection point where they started dropping like a rock? Or is this a case of measuring different things (crash tests vs fatal accident rates)?

    I know you probably don't know off the top of your head, I'm hoping someone can chime in.

    • Dan Luu had some interesting analysis about car safety, comparing how different auto-makers fared on newly introduced crash tests: https://danluu.com/car-safety/

      The main takeaway for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced, which feels indicative of a bad safety culture.

  • I am admittedly not a fan, but I note that in my social circle nobody is considering one, the one person who has one wants to sell it, and one vendor has one (the truck), though that is clearly for marketing purposes, so at least it makes sense.

  • How do we know it can't be explained by self-selecting driver population? That sounds like the most likely explanation, and it's the only explanation advanced by the article you provided.

    • I guess there's something to be said for "hey, if you're considering buying a Tesla, you may be the kind of person who's likely to kill themselves in a car crash. Consider buying a safer car or taking the bus!"

    • Who would have guessed that a vehicle with no turn signal stalk or physical control to shift gears is unsafe!

      Tesla sells too many vehicles for it to be a “self-selecting driver population” thing anymore. They sell almost as many Model Ys as Honda CR-Vs.

      I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.

      Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.

    • They don’t, these are the anti-Tesla folks. No level of reasoning is available for discussions like this.

      I don’t like Elon but I also don’t think fiction and misleading stats serve anyone.

  • We're talking about a brand whose every car has at least 350 HP, and most have more.

    It's not an apples-to-oranges comparison.

    • So why does Dodge do better on the list? Most Dodge models sold are rear-wheel-drive performance cars; they basically only sell the Challenger/Charger and the Hornet SUV that nobody’s buying.

      The lengths people will go to defend Tesla continue to astound me. Can’t we just say that they suck without making excuses for them?

  • > I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.

    Tesla makes unsubstantiated, exaggerated claims about the capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?

The Tesla fans fell for it again.

The Fools Self Driving (FSD) contraption has once again been revealed as a scam, yet it continues to be pushed onto fans as a "self-driving" capability.

If they (Tesla) can hide fatal accidents, what else is Tesla not telling us?

  • This article specifically mentions "Autopilot", not FSD. I'll call out Tesla for BS as much as the next person and I own no stock, but FSD (Supervised) is exactly what it says. There's no aspect of vehicle operation that isn't controlled by FSD, but it must be supervised.

Here we go again. Autopilot != FSD. Autopilot is not "autonomous" driving. It's lane keep with adaptive cruise control, the same system that Honda, Toyota, etc. have. Yes, the naming is wrong and the marketing is bad, but I don't see it as much worse than Toyota Safety Sense. If you use it to be "safe", you're going to swerve off the highway into a ditch. I used Super Cruise from GM in my friend's SUV; as soon as the lane markers went away on a bridge, I almost hit the railing.

I'll get downvoted, but I'm just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, though maybe an apt one, since autopilot in planes can't see and avoid obstacles either.

  • Can you explain why that makes it ok to cover up accidents and lie about the recordings of the event being corrupted?

  • The news isn't really about the effectiveness of any particular tech stack, but about the integrity, or lack thereof, of the manufacturer in reporting incidents. If that is in question, any assessment of the effectiveness of Tesla's tech stacks, whether FSD, autonomy, or robotaxis, is in doubt.

  • I don't get it?

    If Autopilot was misleading, Full Self-Driving is too?

    • Autopilot is completely different software from FSD. If you think FSD is stupid, then Autopilot is worse, because it won't do anything other than stay in the same lane and adjust speed to the car in front of you.

      For some reason, you could turn it on even when you weren't driving on the highway. It doesn't do anything about traffic lights, stop signs, obstacles, etc., because it's just cruise control. It's also included with every vehicle (unlike FSD).

    • The difference is FSD is properly annotated as (Supervised) and does exactly that. Autopilot does not 'autopilot' the vehicle by any reasonable measure.

  • How about the fact that Tesla is killing people and covering it up?

    Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?

    What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.

    • It's not a semantic issue; FSD is a completely different system, but many people mix up the terms when discussing these systems due to poor naming. Autopilot is just cruise control and lane keep; FSD handles navigation and full vehicle control. Articles discussing the dangers of Autopilot make perfectly reasonable claims about a system that was poorly named and marketed, but they are not meaningfully relevant to conversations about FSD.

  • IMHO you're shifting the goalposts (and I am not downvoting).

    Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It was selling "Autopilot" for $8k (now with a subscription, AFAIK), with a pinky promise that "soon" or "next year" or "in two weeks" (jk) you would essentially set a destination, go to sleep, and wake up at your destination[1].

    It's the same as saying "LLM != AI" and arguing that "ChatGPT is not AI; it's a glorified statistical model that is good at creating human-sounding text". Yeah, you and I understand this, but the average guy most likely does not and will get burned by it, because a dozen tech-bros are burning billions of dollars trying to convince everyone that it's a panacea for every problem you can think of.

    [1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that this is what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what levels of driving assistance exist.

    • "Autopilot" is not $8K, that's FSD. Autopilot was the default cruise control/lane keep software and was renamed "Traffic Aware Cruise Control" a few months ago. The original name was ridiculously misleading.