Comment by bigstrat2003

2 days ago

> Because it doesn’t have to be as accurate as a human to be a helpful tool.

I disagree. If something can't be as accurate as a (good) human, then it's useless to me. I'll just ask the human instead, because I know that the human is going to be worth listening to.

Autopilot in airplanes is a good counterexample to that.

Good in most conditions. Not as good as a human. Which is why we still have skilled pilots flying planes, assisted by autopilot.

We don’t say “it’s not as good as a human, so stuff it.”

We say, “it’s great in most conditions. And humans are trained to leverage it effectively, and to fly without it when it cannot be used.”

  • That's a downright insane comparison. The whole problem with generative AI is how extremely unreliable it is. You cannot really trust it with anything because irrespective of its average performance, it has absolutely zero guarantees on its worst-case behavior.

    Aviation autopilot systems are the complete opposite. They are arguably the most reliable computer-based systems ever created. While they cannot fly a plane alone, pilots can trust them blindly to do specific, known tasks consistently well in over 99.99999% of cases, and to provide clear diagnostics when they cannot.

    If gen AI agents were this consistently good at anything, this discussion would not be happening.

  • The autopilots in aircraft have predictable behaviors based on the data and inputs available to them.

    This can still be problematic! If sensors are feeding the autopilot bad data, the autopilot may do the wrong thing for a situation. Likewise, if the pilot(s) do not understand the autopilot's behaviors, they may misuse the autopilot, or take actions that interfere with the autopilot's operation.

    Generative AI has unpredictable results. You cannot make confident statements like "if inputs X, Y, and Z are at these values, the system will always produce this set of outputs".

    In the very short timeline of reacting to a critical mid-flight situation, confidence in the behavior of the systems is critical. A lot of plane crashes have "the pilot didn't understand what the automation was doing" as a significant contributing factor. We get enough of that from lack of training, differences between aircraft manufacturers, and plain old human fallibility. We don't need to introduce a randomized source of opportunities for the pilots to not understand what the automation is doing.
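
    (To make the determinism contrast above concrete, here is a toy Python sketch. The names altitude_hold_correction and sampled_reply are invented for illustration; they stand in for a real control law and a real generative model, and are not actual avionics code or a real model API.)

      import random

      def altitude_hold_correction(target_ft, current_ft, gain=0.1):
          # Deterministic control law: the same inputs always produce the same output.
          return gain * (target_ft - current_ft)

      def sampled_reply(prompt, temperature=1.0):
          # Stand-in for a generative model: the output is drawn from a distribution,
          # so repeated calls with the same prompt can differ from run to run.
          del prompt, temperature  # unused in this toy stand-in
          return random.choice(["climb", "descend", "hold altitude", "turn left"])

      # Same inputs, same output, every time:
      assert altitude_hold_correction(10_000, 9_500) == altitude_hold_correction(10_000, 9_500)

      # Same prompt, but no guarantee of the same output across runs:
      print({sampled_reply("what should I do?") for _ in range(10)})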

    • But now it seems like the argument has shifted.

      It started out as, "AI can make more errors than a human. Therefore, it is not useful to humans." Which I disagreed with.

      But now the argument seems to be, "AI is not useful to humans because its output is non-deterministic." Is that an accurate representation of what you're saying?
