Comment by tovej
18 days ago
Regurgitating facts kind of assumes it is a language model, as you're assuming a language interface. I would assume a real "world model" or digital twin to be able to reliably model relationships between phenomena in whatever context is being modeled. Validation would probably require experts in whatever thing is being modeled to confirm that the model captures phenomena to some standard of fidelity. Not sure if that's regurgitating facts to you -- it isn't to me.
But I don't know what you're asking exactly. Maybe you could specify what it is you mean by "real world model" and what you take fact-regurgitating to mean.
You said this:
So I'm wondering if you think world models can synthesize new facts.
A world model can be used to learn something about the real system. I said "synthesize" because, in the context LLMs work in (using a corpus to generate sentences), that is what it would look like.
Why can’t an LLM run experiments to synthesize new facts?