Comment by perfmode

3 months ago

People... employees... friends... lovers... "hallucinate" too.

What rational agent is infallible?

LLMs don't actually "hallucinate". Hallucinating would mean the model starting from a different context than the one it actually received; given the fidelity of electronic transmission these days, that's practically never an issue.

LLMs also have no grounding in abstract concepts such as true and false, which means they produce output stochastically rather than logically. People are sometimes illogical too, but people are also durable, learn over time, and learn extremely quickly. Current LLMs learn only once, during training, so they easily get stuck in loops and pitfalls, producing output that makes no sense to the human reader. The LLM can't "understand" that its output makes no sense because it doesn't "understand" anything in the sense that humans understand things.
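
As a minimal sketch of what "stochastic rather than logical" means here (the logits, token names, and function below are made up purely for illustration, not taken from any real model), an LLM scores candidate next tokens and then samples from the resulting probability distribution, so nothing in the mechanism constrains the draw to be true:

    import numpy as np

    # Toy next-token distribution: the model's final layer produces logits,
    # softmax turns them into probabilities, and the sampler draws a token.
    logits = np.array([2.0, 1.0, 0.5, -1.0])  # scores for 4 candidate tokens
    tokens = ["true", "false", "maybe", "banana"]

    def sample_next_token(logits, temperature=1.0):
        # Temperature rescales logits; softmax converts them to probabilities.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        # The draw is random: even low-probability tokens are sometimes
        # chosen, which is why output is stochastic, not logically derived.
        return np.random.default_rng().choice(tokens, p=probs)

    print(sample_next_token(logits))  # usually "true", but not always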

No one is claiming that any agent is or should be considered infallible. But for every other form of technology humans create, including software, there is a minimal standard of predictability and efficiency that the technology must meet to be considered useful.

For some reason, LLMs are the exception. No matter how much they hallucinate, confabulate, or what have you, someone will always, almost reflexively, dismiss the criticism as irrelevant because "humans do the same thing," even though a human being who hallucinated as often as LLMs do would be committed to an asylum.

In general terms, the more mission-critical a technology is, the more reliable it needs to be. Given that we apparently intend to integrate LLMs into every aspect of human society as aggressively as possible, I don't believe it's unreasonable to expect them to be more reliable than a sociopathic dementia patient with Munchausen syndrome.

But that's just me. I don't look forward to a future where my prescriptions are written by software agents that tend to make up illnesses and symptoms, and filled by software agents that can't do basic math. And it's all considered OK because the premise that humans would always be as bad or worse, and shouldn't be trusted with even basic autonomy, has become so normalized that we just accept as inevitable the abuse of the unstable technologies rolled out to deprecate us from society. Apparently that just makes me a Luddite. IDK.