Comment by krapp

17 hours ago

No one is claiming that any agent is or should be considered infallible, but every other form of technology humans create - including software - has to meet a minimal standard of predictability and efficiency before it's considered useful.

For some reason, LLMs are the exception. It doesn't matter how much they hallucinate, confabulate, or what have you; someone will always, almost reflexively, dismiss any criticism as irrelevant because "humans do the same thing." Even though human beings who hallucinated as often as LLMs do would be committed to asylums.

In general terms, the more mission-critical a technology is, the more reliable it needs to be. Given that we appear intent on integrating LLMs into every aspect of human society as aggressively as possible, I don't believe it's unreasonable to expect them to be more reliable than a sociopathic dementia patient with Munchausen syndrome.

But that's just me. I don't look forward to a future where my prescriptions are written by software agents that tend to make up illnesses and symptoms, and filled by software agents that can't do basic math, and it's all considered ok because the premise that humans would always be as bad or worse, and shouldn't be trusted with even basic autonomy, has become so normalized that we just accept as inevitable the abuses of the unstable technologies rolled out to deprecate us from society. Apparently that just makes me a Luddite. IDK.