Comment by startupsfail

3 days ago

Below is the worst quote... It is plain wrong to see an LLM as a bag of words. LLMs pre-trained on large datasets of text are world models. LLMs post-trained with RL are RL agents that use these modeling capabilities.

> We are in dire need of a better metaphor. Here’s my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.

LLMs aren't world models; they are language models. It will be interesting to see which LLM implementation techniques prove useful in building world models, but that's not what we have now.

  • Can you give an example of some part of the physical world or infosphere that an LLM can't model, at least approximately?

When you see a dog, or describe the entity, do you discuss the genetic makeup or the bone structure?

No, you describe the bark.

The end result is what counts. Training or not, it's just spewing predictive, relational text.

  • So do we, but that's helpful.

    • " Training or not, it's just spewing predictive, relational text."

      If you're responding to that, "so do we" is not accurate.

      We're not spewing predictive, relational text. We're communicating, after thought, and the output is meant to communicate something specific.

      With AI, it's not trying to communicate an idea. It's just spewing predictive text. There's no thought to it. At all.