If it's both engines shortly after takeoff, you're fucked anyway.
But I'm an advocate of KISS. At a certain point you have to trust that the pilot is not going to do something extremely stupid or suicidal. Making overly complex systems to try to protect pilots from themselves leads to even worse issues, such as the faulty software in the Boeing 737-MAX.
Was thinking this same thing. A minute feels like a long time to us (as in the Garmin example above), but a decent number of airplane accidents take only a couple of minutes end to end between everything being fine and the crash. Building an insulation layer between the machine and the experts who are supposed to be flying it only makes it less safe by reducing control.
Proposed algorithm: If the flight computer thinks the engine looks "normal", then blare an alarm for x seconds before cutting the fuel.
I wonder if there have been cases where a pilot had to cut fuel before the computer could detect anything abnormal? I do realize that defining "abnormal" is the hardest part of this algorithm.
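A minimal sketch of that proposal in Python, purely for illustration: the callbacks (engine_looks_normal, sound_alarm, cut_fuel) are hypothetical placeholders, and the hard part, as noted, is what engine_looks_normal should actually check.

    import time

    WARNING_SECONDS = 3  # the "x seconds" from the proposal; value is a placeholder


    def request_fuel_cutoff(engine_looks_normal, sound_alarm, cut_fuel):
        """If the computer thinks the engine is healthy, warn before honoring the
        cutoff; if it already looks abnormal, honor the cutoff immediately."""
        if not engine_looks_normal():
            cut_fuel()  # the computer agrees something is wrong: no delay
            return
        sound_alarm()  # engine looks normal: warn the crew first
        time.sleep(WARNING_SECONDS)  # warning window before the cut takes effect
        cut_fuel()  # switch still set after the warning: cut fuel anyway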
The incident with Sully landing in the Hudson is an interesting one related to this. They had a dual bird strike and both engines were totally obliterated with no thrust at all, yet it came up later in the hearings that the computer data showed one engine still producing thrust because of a faulty sensor. That kind of sensor input can't really be trusted in a true emergency or edge case, especially if a sensor malfunctions while an engine is on fire.
As a software engineer myself, I think it's interesting that we feel software is the true solution when we wouldn't accept that solution ourselves. For example, a company typically has code reviews and a release-gating process, but also some exception process for quickly committing code or making adjustments when there's an outage. Could you imagine if the system said, "Hey, we aren't detecting an outage. You sure about that? Why don't you go take a walk and get a coffee, and if you still think there's an outage in 15 minutes, we'll let you make that critical change"?
If the computer could tell perfectly whether the engine “looks normal” or not, there wouldn’t be any need for a switch. If it can’t, the switch most likely needs to work without delay in at least some situations.
In safety-critical engineering, you generally either automate things fully (i.e. to exceed human capabilities in all situations, not just most), or you keep them manual. Half-measures of automation kill people.
But humans can't tell perfectly either and would be responding to much of the same data that automation would be.
I wonder if they could have buttons that are about the situation rather than the technical action: a fire-response button, or a shut-down-on-the-ground button.
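As a toy illustration of those situation buttons (the names and steps below are made up, not anything from a real checklist), each one is essentially an intent mapped to a fixed sequence of low-level actions:

    # Hypothetical intent-based controls: the pilot declares the situation and the
    # system sequences the individual actions, rather than exposing each switch.
    FIRE_RESPONSE = ["throttle_to_idle", "fuel_cutoff", "arm_extinguisher"]
    GROUND_SHUTDOWN = ["confirm_on_ground", "throttle_to_idle", "fuel_cutoff"]


    def press(situation, execute):
        # execute() is a placeholder for whatever actually commands the aircraft.
        for action in situation:
            execute(action)


    # e.g. press(FIRE_RESPONSE, print)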
But it does seem like half-measure automation could be a contributing factor in a lot of crashes. Reverting to a pilot in a stressful situation is a risk, as is placing too much faith in individual sensors. In a sense this problem applies both to the aircraft internally and to the whole air traffic system: it is a mess of expiring data being consumed and produced by a mix of humans and machines. Maybe the missing part is good statistical modelling of that. If systems can make better predictions, they can be more cautious in their responses.
If the warning period is short enough, is it possible it's always beneficial, or are 2-3 seconds of additional fuel during an undetected fire more dangerous?
Rough pseudocode for the proposed behavior:

    if engine_status == normal and time_since_last_toggle > threshold:
        sound the alarm for X seconds, then cut fuel
    else:
        cut fuel immediately

Toggling the switch again during the warning overrides the delay and cuts fuel at once.