Comment by aurareturn

18 days ago

So do we expect real world models to just regurgitate new facts from their training data?

Regurgitating facts kind of assumes it is a language model, as you're assuming a language interface. I would assume a real "world model" or digital twin to be able to reliably model relationships between phenomena in whatever context is being modeled. Validation would probably require experts in whatever thing is being modeled to confirm that the model captures phenomena to some standard of fidelity. Not sure if that's regurgitating facts to you -- it isn't to me.

But I don't know what you're asking exactly. Maybe you could specify what it is you mean by "real world model" and what you take fact-regurgitating to mean.

  •   But I don't know what you're asking exactly. Maybe you could specify what it is you mean by "real world model" and what you take fact-regurgitating to mean.
    

    You said this:

  If this existing corpus includes useful information it can regurgitate that. It cannot, however, synthesize new facts by combining information from this corpus.
    

    So I'm wondering if you think world models can synthesize new facts.

    • A world model can be used to learn something about the real system. I said "synthesize" because, in the context LLMs work in (using a corpus to generate sentences), that is what it would look like.