Comment by ACCount37
2 days ago
Words are the "simplistic" projection of an LLM's abstract thoughts.
An LLM has: words in its input plane, words in its output plane, and A LOT of cross-linked internals between the two.
Those internals aren't "words" at all - and that's where most of the "action" happens. It's how LLMs can do things like translate from language to language, or recall knowledge they only encountered in English in the training data while speaking German.
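One rough way to poke at that claim (not a proof, just a sketch; it assumes the Hugging Face `transformers` library, and the model choice, layer index, and mean-pooling are arbitrary) is to compare mid-layer hidden states for a sentence and its translation against an unrelated sentence:

```python
# Compare mid-layer hidden states for a translated sentence pair vs. an
# unrelated sentence. Model, layer, and pooling are illustrative choices.
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)
model.eval()

def mid_layer_vector(text, layer=6):
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # Mean-pool one intermediate layer into a single sentence vector.
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

en = mid_layer_vector("The cat sleeps on the sofa.")
de = mid_layer_vector("Die Katze schläft auf dem Sofa.")
fr = mid_layer_vector("Les marchés ont chuté hier.")  # unrelated baseline

print("translation pair:", torch.cosine_similarity(en, de, dim=0).item())
print("unrelated pair:  ", torch.cosine_similarity(en, fr, dim=0).item())
```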
> It's how LLMs can do things like translate from language to language
The heavy lifting here is done by embeddings. This does not require a world model or “thought”.
LLMs are compression and prediction. The most efficient way to (lossily) compress most things is to actually understand them. I'm not saying LLMs are doing a good job of that, but that's the fundamental mechanism here.
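The link between prediction and compression is standard information theory rather than anything LLM-specific: a model that assigns probability p to the next symbol can encode it in about -log2(p) bits (which is what arithmetic coding achieves), so better prediction means a shorter code. A toy sketch with a character-level bigram model, where the corpus and the add-one smoothing are placeholders for illustration:

```python
import math
from collections import Counter, defaultdict

# Toy character-level bigram model. Its code length is measured on the
# same text it was counted from, so this only illustrates the identity
# "better prediction = shorter code"; it is not a fair benchmark.
corpus = "the cat sat on the mat. the cat ate the rat."
alphabet = sorted(set(corpus))

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def prob(prev, nxt):
    # Add-one smoothing over the corpus alphabet.
    total = sum(counts[prev].values()) + len(alphabet)
    return (counts[prev][nxt] + 1) / total

def code_length_bits(text):
    # -log2(p) per symbol is the ideal code length under this model.
    return sum(-math.log2(prob(p, n)) for p, n in zip(text, text[1:]))

uniform_bits = (len(corpus) - 1) * math.log2(len(alphabet))
print(f"uniform code: {uniform_bits:.1f} bits")
print(f"bigram model: {code_length_bits(corpus):.1f} bits")
```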
Where’s the proof that efficient compression results in “understanding”? Is there a rigorous model or theorem, or did you just make this up?
The "cross-linked internals" only go one direction and only one token at a time, slide window and repeat. The RL layer then picks which few sequences of words are best based on human feedback in a single step. Even "thinking" is just doing this in a loop with a "think" token. It is such a ridiculously simplistic model that it is vastly closer to an adder than a human brain.