
Comment by coldtea

7 hours ago

As a general architecture, an LLM also has limitations that can't be fixed unless we switch to another, fundamentally different AI design that isn't LLM-based.

There are also limitations due to maths and/or physics that aren't fixable under any design. Outside science fiction, there is no technology whose limitations are all fixable.

Here's one: https://arxiv.org/abs/2401.11817

Am I misreading that paper? They define hallucination as anything other than the correct answer and prove that there are infinitely many questions an LLM can't answer correctly. But that's true of any architecture: there are infinitely many problems a team of geniuses with supercomputers can't answer either. If an LLM can be made to reliably say "I don't know" when it doesn't know, hallucination is solved for practical purposes; they contend this doesn't matter because you can keep drawing from your infinite pile of unanswerable questions, and the LLM will either never answer or will make something up. It seems like a technically true result that isn't usefully true.
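
If I'm reading it right, the core of the proof is a diagonalization; a rough sketch in my own notation (theirs is more careful, so treat this as a paraphrase, not the paper's exact construction):

\[
\begin{aligned}
&\text{enumerate all computable LLMs } h_1, h_2, h_3, \ldots \text{ (each one appears infinitely often)} \\
&\text{and all questions } s_1, s_2, s_3, \ldots, \text{ then define the ground truth } f \text{ by} \\
&\qquad f(s_i) := \text{any answer} \neq h_i(s_i).
\end{aligned}
\]

Every computable LLM shows up as h_i for infinitely many i, so it disagrees with f (i.e. hallucinates, under their definition) on infinitely many questions, no matter how it was trained. Nothing in the construction is specific to LLMs; any computable answerer gets diagonalized against the same way.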