Comment by zlg_codes
2 years ago
How does this not extend to ALL output from an LLM? If it can't understand its own runtime environment, it's not qualified to answer my questions.
That's correct. LLMs are plausible sentence generators; they don't "understand"* their runtime environment (or any of their other input), and they're not qualified to answer your questions. The companies providing these LLMs typically include a disclaimer along these lines, because LLMs tend to make up ("hallucinate", in the industry vernacular) outputs that are plausibly similar to the input text, even when those outputs are wildly, obviously wrong and complete nonsense to boot.
Obviously, people find some value in some output of some LLMs. I've enjoyed the coding autocomplete stuff we have at work, it's helpful and fun. But "it's not qualified to answer my questions" is still true, even if it occasionally does something interesting or useful anyway.
*- this is a complicated term with a lot of baggage, but fortunately for the length of this comment, I don't think that any sense of it applies here. An LLM doesn't understand its training set any more than the mnemonic "ETA ONIS"** understands the English language.
**- a vaguely name-shaped arrangement of the most common letters in the English language, in descending order of frequency. Useful if you need to remember them for some reason, such as guessing at a substitution cypher.
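To make that footnote concrete, here is a minimal sketch of the frequency-guessing trick it alludes to. Everything in it is illustrative (the letter ordering, the function name, the toy ciphertext), and on text this short the naive mapping will mostly miss, since real cryptanalysis also leans on digrams and word shapes:

    from collections import Counter

    # Common English letters in rough descending frequency (the "ETA ONIS" idea).
    ENGLISH_BY_FREQUENCY = "ETAONISHRDLU"

    def guess_substitution(ciphertext: str) -> dict:
        # Pair the most frequent ciphertext letters with the most frequent
        # English letters. A starting point for a solver, not a solution.
        counts = Counter(c for c in ciphertext.upper() if c.isalpha())
        ranked = [letter for letter, _ in counts.most_common()]
        return dict(zip(ranked, ENGLISH_BY_FREQUENCY))

    cipher = "WKLV LV MXVW DQ HADPSOH"   # toy ciphertext (a Caesar shift of a short sentence)
    mapping = guess_substitution(cipher)
    print("".join(mapping.get(c, c) for c in cipher.upper()))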
If you can watch the video demo of this release, or for that matter the Attenborough video, and still claim that these things lack any form of "understanding," then your imagination is either a lot weaker than mine, or a lot stronger.
Behavior indistinguishable from understanding is understanding. Sorry, but that's how it's going to turn out to work.
Have you considered that mankind simply trained itself on the wrong criteria for detecting understanding?
Why are people so eager to believe that electric rocks can think?
That's not entirely accurate.
LLMs encode some level of understanding of their training set.
Whether that's sufficient for a specific purpose, or sufficiently comprehensive to generate side effects, is an open question.
* Caveat: with regard to introspection, this also assumes the model isn't specifically guarded against it and isn't opaquely lying.
> plausible sentence generators, they don't "understand"* their runtime environment
Exactly like humans don't understand how their brain works.
We've put an awful lot of effort into figuring that out, and have some answers. Many of the problems in exploring the brain are ethical ones, because people tend to die or suffer greatly if we experiment on them.
Unlike LLMs, which are built by humans and have literal source code and manuals and SOPs and shit. Their very "body" is a well-documented digital machine. An LLM trying to figure itself out has MUCH less trouble than a human figuring itself out.
How many books has your brain been trained with? Can you answer accurately?
There are reasons that humans can't report how many books they've read: they simply don't know and didn't measure. There is no such limitation preventing an LLM from knowing where its knowledge came from and summing it up. Unless you're telling me a computer can't count references.
Also, why are we comparing humans and LLMs when the latter doesn't come anywhere close to how we think, and is working with different limitations?
The 'knowledge' of an LLM is in a filesystem and can be queried, studied, exported, etc. The knowledge of a human being is encoded in neurons and other wetware that lacks simple binary chips to do dedicated work. Decidedly less accessible than coreutils.
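In the narrow, literal sense that the weights are files on disk, a sketch like this will enumerate them. It assumes a locally downloaded model.safetensors checkpoint and the safetensors and torch packages; note that what it surfaces is parameters and shapes, not a ledger of which books produced them:

    from safetensors import safe_open

    # Assumption: a locally downloaded checkpoint named "model.safetensors".
    total_params = 0
    with safe_open("model.safetensors", framework="pt", device="cpu") as f:
        for name in f.keys():
            tensor = f.get_tensor(name)   # loads the raw weight tensor
            total_params += tensor.numel()
            print(name, tuple(tensor.shape), tensor.dtype)

    print(f"total parameters: {total_params:,}")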
Imagine for just a second that the ability of computers to count “references” has no bearing on this: there is such a limitation, and LLMs suffer from the same issue as you do.