
Comment by Hendrikto

2 days ago

> It's how LLMs can do things like translate from language to language

The heavy lifting here is done by embeddings. This does not require a world model or “thought”.
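
To make the embeddings point concrete: a multilingual embedding space maps a word and its translation to nearby vectors, so translation-like behavior can fall out of simple geometry. A toy sketch (the vectors below are made up purely for illustration, not taken from any real model):

    import math

    # Made-up 3-d "embeddings"; real models use hundreds of dimensions,
    # but the geometric argument is the same.
    emb = {
        "dog":    [0.81, 0.12, 0.55],   # English
        "chien":  [0.79, 0.15, 0.58],   # French translation: a nearby point
        "banana": [0.05, 0.92, 0.31],   # unrelated concept: far away
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    print(cosine(emb["dog"], emb["chien"]))   # ~0.999: near neighbours
    print(cosine(emb["dog"], emb["banana"]))  # ~0.34: far apart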

LLMs are compression and prediction. The most efficient way to (lossily) compress most things is by actually understanding them. Not saying LLMs are doing a good job of that, but that is the fundamental mechanism here.
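
The prediction/compression link can be made concrete: an arithmetic coder driven by a predictive model spends about -log2 p(symbol) bits per symbol, so the better the model predicts the text, the fewer bits it needs to store it. A rough sketch with made-up character probabilities (not a real language model, just to show the accounting):

    import math

    TEXT = "the cat sat on the mat"

    def code_length_bits(text, prob):
        # Ideal arithmetic-coding cost: -log2 p(symbol) bits per symbol.
        return sum(-math.log2(prob(ch)) for ch in text)

    def uniform(ch):
        # A clueless model: 27 symbols (a-z plus space), all equally likely.
        return 1 / 27

    FREQS = {" ": 0.18, "e": 0.10, "t": 0.09, "a": 0.07}

    def better(ch):
        # A model that has "noticed" which characters are common in English
        # (probabilities invented for illustration; the other 23 symbols
        # share the leftover probability mass).
        return FREQS.get(ch, (1 - sum(FREQS.values())) / 23)

    print(code_length_bits(TEXT, uniform))  # ~105 bits
    print(code_length_bits(TEXT, better))   # ~85 bits: better prediction, shorter code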

  • Where’s the proof that efficient compression results in “understanding”? Is there a rigorous model or theorem, or did you just make this up?

    • It's the other way around: human learning appears to amount to very efficient compression, and a world model appears to be a particular kind of highly compressed data set with specific properties.

      This is a case where it's going to be next to impossible to prove that no counterexamples exist. Conversely, if what I've written here is wrong, a single counterexample will likely suffice to blow the entire thing out of the water.

    • No answer I give will be satisfying to you unless I can come up with a rigorous mathematical definition of understanding, which is de facto solving the hard AI problem. So there's not really any point in talking about it, is there?

      If you're interested in why compression is like understanding in many ways, I'd suggest reading through the wikipedia article on Kolmogorov complexity.

      https://en.wikipedia.org/wiki/Kolmogorov_complexity
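
      Kolmogorov complexity itself is uncomputable, but an ordinary compressor gives a crude, computable stand-in for the same idea: data with structure you can "understand" compresses far better than data without it. A quick sketch (Python, purely for illustration):

          import os
          import zlib

          structured = ("the cat sat on the mat. " * 40).encode()  # 960 bytes with obvious regularity
          noise = os.urandom(len(structured))                       # 960 bytes of incompressible randomness

          # The compressed size is a rough upper bound on Kolmogorov complexity:
          # the more structure the compressor finds, the shorter its output.
          print(len(zlib.compress(structured)))  # tiny: the repetition is captured and factored out
          print(len(zlib.compress(noise)))       # about as long as the input: nothing to exploit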