
Comment by amputect

2 years ago

That's correct. LLMs are plausible sentence generators; they don't "understand"* their runtime environment (or any of their other input) and they're not qualified to answer your questions. The companies providing these LLMs to users typically include a disclaimer along these lines, because LLMs tend to make up ("hallucinate", in the industry vernacular) outputs that are plausibly similar to the input text, even if those outputs are wildly and obviously wrong and complete nonsense to boot.

Obviously, people find some value in some output of some LLMs. I've enjoyed the coding autocomplete stuff we have at work; it's helpful and fun. But "it's not qualified to answer my questions" is still true, even if it occasionally does something interesting or useful anyway.

*- this is a complicated term with a lot of baggage, but fortunately for the length of this comment, I don't think that any sense of it applies here. An LLM doesn't understand its training set any more than the mnemonic "ETA ONIS"** understands the English language.

**- a vaguely name-shaped presentation of the most common letters in the English language, in descending order. Useful if you need to remember those for some reason like guessing a substitution cypher.
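
For the curious, here's a minimal sketch of that first-pass guess in Python, assuming a toy monoalphabetic cipher; the sample ciphertext and helper names are mine for illustration, not anything standard:

    from collections import Counter
    import string

    # The mnemonic's frequency order, padded with the rest of the alphabet.
    MNEMONIC_ORDER = "ETAONIS"
    FALLBACK = [c for c in string.ascii_uppercase if c not in MNEMONIC_ORDER]
    ENGLISH_ORDER = MNEMONIC_ORDER + "".join(FALLBACK)

    def guess_key(ciphertext):
        """Map each ciphertext letter to a plaintext guess by frequency rank."""
        counts = Counter(c for c in ciphertext.upper() if c.isalpha())
        ranked = [c for c, _ in counts.most_common()]
        # Pad with unseen letters so every alphabet letter gets some guess.
        ranked += [c for c in string.ascii_uppercase if c not in ranked]
        return dict(zip(ranked, ENGLISH_ORDER))

    def apply_key(ciphertext, key):
        """Decode with the guessed key, leaving spaces and punctuation alone."""
        return "".join(key.get(c, c) for c in ciphertext.upper())

    # Toy example (a Caesar shift of 4, i.e. a particularly boring substitution):
    sample = "XLI UYMGO FVSAR JSB NYQTW SZIV XLI PEDC HSK"
    print(apply_key(sample, guess_key(sample)))
    # The output is only a crude first pass; real cryptanalysis refines it by hand.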

If you can watch the video demo of this release, or for that matter the Attenborough video, and still claim that these things lack any form of "understanding," then your imagination is either a lot weaker than mine, or a lot stronger.

Behavior indistinguishable from understanding is understanding. Sorry, but that's how it's going to turn out to work.

  • Have you considered that mankind simply trained itself on the wrong criteria for detecting understanding?

    Why are people so eager to believe that electric rocks can think?

    • Why are people so eager to believe that people can? When it comes to the definitions of concepts like sentience, consciousness, thinking and understanding, we literally don't know what we're talking about.

      It's premature in the extreme to point at something that behaves so much like we do ourselves and claim that whatever it's doing, it's not "understanding" anything.


That's not entirely accurate.

LLMs encode some level of understanding of their training set.

Whether that's sufficient for a specific purpose, or sufficiently comprehensive to generate side effects, is an open question.

* Caveat: with regard to introspection, this also assumes the model isn't specifically guarded against it and isn't opaquely lying.

> plausible sentence generators, they don't "understand"* their runtime environment

Exactly like humans don't understand how their brains work.

  • We've put an awful lot of effort into figuring that out, and we have some answers. Much of the difficulty in exploring the brain is ethical, because people tend to die or suffer greatly if we experiment on them.

    Unlike LLMs, which are built by humans and have literal source code and manuals and SOPs and shit. Their very "body" is a well-documented digital machine. An LLM trying to figure itself out has MUCH less trouble than a human figuring itself out.