
Comment by p-e-w (6 hours ago)

> I don’t agree with that definition of “hallucination”, for starters.

So substitute another phrase, if you prefer. It doesn't change the logic.

"Specifically, we define a formal world where bungling is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably bungle if used as general problem solvers."

  • Their diagonalization argument applies to any system that uses finite training data. Calling such a system "LLM" is an (unintentional) red herring.
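
To see why substituting the word changes nothing, here is a minimal sketch of the diagonal construction the abstract alludes to. This is my own illustration, not code from the paper; `my_model` and the `+ 1` trick are arbitrary stand-ins for any fixed computable system and any disagreeing ground truth:

```python
from typing import Callable

def diagonal_ground_truth(model: Callable[[int], int]) -> Callable[[int], int]:
    """Return a computable 'ground truth' that disagrees with `model`
    on every input: the Cantor-style diagonal construction."""
    return lambda x: model(x) + 1  # always differs from model(x)

# Stand-in for any fixed computable system (an "LLM", or anything else
# trained on finite data): some total function from inputs to outputs.
def my_model(x: int) -> int:
    return x * x

truth = diagonal_ground_truth(my_model)

# Measured against this ground truth, the model is inconsistent
# ("bungles") on every single input.
assert all(my_model(x) != truth(x) for x in range(1000))
```

Nothing in the construction mentions language models: any fixed computable function admits a computable ground truth it fails to match, whatever word you attach to the mismatch.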