Comment by jasoncartwright

4 hours ago

Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

I think this is part of the reason I am wary of trying it (including some of the competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course-correct at the literal last second.

  • SAE level 2 is just a bad idea. People can't be expected to carefully monitor a car and take over at a moment's notice when it's doing all the driving. My adaptive cruise control is great, and I hope to have a future car where I can zone out while it drives and take over after a few seconds' heads-up, but the zone in between shouldn't be a valid feature.

  • Interestingly, I think that similar types of arguments are made against "agentic coding":

    If you don't pay constant attention, you will never notice when it slips in a bug or security issue

    • Car crash deaths are better known than deaths caused by software bugs. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends to an experimental tech.

  • Treat it like a driver assistance system. I treat FSD the same as I treat Adaptive Cruise Control and Lane Keep Assist in my CR-V. I keep my hands on the steering wheel and follow along with the decision-making.

    • Reminds me of a situation not long ago.

      I’m in the left lane on the highway. Tesla ahead of me, but quite a ways away.

      I realize as I’m driving that the Tesla is moving quite slowly for left-lane driving. And before you say it, yes, there are lots of people speeding in highway left lanes too.

      So - I passed on the right rather than tailgate. Look over and see a guy leaning back in his seat. No hands on wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.

      To your point about using FSD the way you do, that makes total sense to me. Which implies you would also cruise at the right speed for the lane you're in, unlike my example.

    • Real question, then, from someone who only bothers driving when he must and even then in a 2016 model: Why do you use it? What beneficial purpose do you find it to serve?

      I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)

    • Which is just worse.

      When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.

      Making me guess what the car is going to do at any given time adds complexity to the process: am I changing lanes now? Oh, I guess I am, because the autonomy thinks we should, etc.

  • A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.

      I don't really buy that. There are a lot of situations (e.g. being directed to park in a space at a fairgrounds, ski area, or whatever) that, AFAIK, you can't reasonably expect to be programmed into a car's computer. Even if a car can legitimately handle roads under most circumstances, it's not going to be able to handle everything.

    • Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.

    • How do you reverse such a car into your own driveway that's positioned in a funny way at an angle and an incline? What if you're parking off road for any reason? Like, you have to be able to manoeuvre your own vehicle sometimes.

To be fair, that report says

> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”

It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, somewhere around 5s prior to impact would be a reasonable cutoff.

  • It's an insane reversal of roles. In a standard level 2 ADAS, the system detects a pending collision the driver has not responded to and pumps the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of pumping the brakes. It's pure insanity.

    Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.

    [1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...

  • IDK, this has the same unethical energy as police turning off body cameras.

    In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.

    In the worst case, this is intentional.

    In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.

    This is also Safety-Critical Engineering 101. Like... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally or through an intentional omission.

    • > the "right thing to do" is NOT turn off the cameras just before a collision

      Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)

  • This is a policy that Tesla put in place, period. Handing control to the driver suddenly, at a weird moment, can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it's all too unexpected.

    • Yep, your comment reminds me of a time my mother was about to hit a bird in the road. However, she was too busy arguing with the passenger to notice, and her driving was already starting to become erratic. I decided not to tell her because I knew the shock could cause her to do something more drastic, like crashing the car trying to avoid it.

    • I guess I'll step in for the counter.

      How is a car supposed to pre-empt being in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?

  • The few Tesla post-mortems I read early on stated that FSD turned off before impact and used this as a defence of their system. If they had shared that this happened 1 second before impact (i.e. far too late for a human to respond), I’d have some sympathy. I have never read a Tesla statement that contained this information.

    For normal incidents, 2 seconds is the response time typically added for corrective action to take effect (avoidance, braking). I’d extend this for FSD, because it implies a lower level of engagement, so you need more time to re-engage with the car.
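
    To put rough numbers on that (my own back-of-the-envelope arithmetic, not from any report): at 65 mph, roughly 29 m/s, a 2-second response time alone means about 58 m of travel before any corrective input even begins, and each extra second of re-engagement adds another ~29 m.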

  • This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second, causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event, rather than requiring that self-driving be active at the exact moment of impact.

    • Agreed. I also feel like there is a world of difference between the driver deliberately assuming control at the last second because they notice that an accident is about to happen, and the car itself yielding control unprompted because it thinks an accident is about to happen.

      The former is to be expected. The latter seems likely to make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.

  • So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.

    That's still not a good look.

    And it does mean that FSD shouldn't be trusted as much as it is: if the car is putting itself into unresolvable situations, that's still a problem with FSD, even if it isn't in direct control at the moment of impact.

It's been well known for a while now, and it's not to avoid being recorded as active; it's to keep a possibly damaged computer from continuing to control the car in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?

AEB should still be working to pump the brakes AFAIK, but auto-steer and cruise control will be disabled, even while the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.

EDIT: IIRC the threshold for disengagement is 1s.

  • >> Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

    > It's been well known for a while now, and it's not to avoid being recorded as active; it's to keep a possibly damaged computer from continuing to control the car in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?

    That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if 1) their implementation also stopped recording seconds before a crash, or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.

      I'm just stating what I remember; I'm not trying to defend Tesla.

      For 1), it's the first time I've heard this from a technical point of view. Tesla's dashcam records the last 10 minutes continuously, and in case of a crash it should save the data on the internal computer and send it back to Tesla if feasible, AFAIR (I'm an owner). IIRC it's not the first case, though, where Tesla claimed the data wasn't available or was corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but I also believe Tesla is behaving unethically and dishonestly to cover it up or delay retrieval.

      2) I often hear this as FUD, as in: AP/FSD was off, the user disengaged it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had the data released, and it showed some of the following: the user touched the steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), the user was pressing the accelerator pedal by accident, the user was pressing the accelerator instead of the brake, etc.

Disregarding the fact that the NHTSA findings apparently contradict it (though that may just reflect a change made since the 2022 report), Tesla claims on their FSD marketing page to use five seconds before a collision event as the threshold for their data reporting:

> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]

In theory, that should more than cover the common perception-response times of roughly 1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process, as driver assistance systems return control to the driver, and its impact on driver response times and overall alertness.
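
Read literally, that attribution rule is simple to state. Here's a rough sketch of how I understand the quoted policy (the five-second window comes from their quote; the function name and data shapes are just my own illustration, not anything Tesla has published):

    # Sketch of the quoted attribution rule: a collision is counted as
    # "FSD (Supervised) engaged" if the system was active at any point
    # within the five seconds leading up to the impact.
    ATTRIBUTION_WINDOW_S = 5.0

    def counted_as_fsd_collision(impact_time_s, fsd_active_intervals):
        """fsd_active_intervals: list of (start_s, end_s) spans when FSD was on."""
        window_start = impact_time_s - ATTRIBUTION_WINDOW_S
        return any(start <= impact_time_s and end >= window_start
                   for start, end in fsd_active_intervals)

By that definition, a system-initiated abort one second before impact (as in the NHTSA report above) would still count against FSD in the Vehicle Safety Report numbers, whatever any press statement says about FSD being "off at the time of the crash".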

If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we’re now including the time required to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the stimulus they're now barreling down on.

For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even though I obviously wasn't the one driving. Hell, I remember my mom doing that during normal braking back when I was learning to drive.

0. https://www.tesla.com/fsd/safety#:~:text=within five seconds