Comment by tovej

19 days ago

Then the LLM is not actually modelling the world, but using other tools that do.

The LLM is not the main component in such a system.

So do we expect real world models to just regurgitate new facts from their training data?

  • Regurgitating facts kind of assumes it is a language model, as you're assuming a language interface. I would assume a real "world model" or digital twin to be able to reliably model relationships between phenomena in whatever context is being modeled. Validation would probably require experts in whatever thing is being modeled to confirm that the model captures phenomena to some standard of fidelity. Not sure if that's regurgitating facts to you -- it isn't to me.

    But I don't know what you're asking exactly. Maybe you could specify what it is you mean by "real world model" and what you take fact-regurgitating to mean.

    •   But I don't know what you're asking exactly. Maybe you could specify what it is you mean by "real world model" and what you take fact-regurgitating to mean.
      

      You said this:

  If this existing corpus includes useful information it can regurgitate that. It cannot, however, synthesize new facts by combining information from this corpus.
      

      So I'm wondering if you think world models can synthesize new facts.