Comment by aurareturn
18 days ago
It cannot, however, synthesize new facts by combining information from this corpus.
Are we sure? Why can't the LLM use tools, run experiments, and create new facts like humans?
Then the LLM is not actually modelling the world, but using other tools that do.
The LLM is not the main component in such a system.
So do we expect real world models to just regurgitate facts from their training data?
Regurgitating facts kind of assumes it is a language model, since that presupposes a language interface. I would expect a real "world model" or digital twin to be able to reliably model relationships between phenomena in whatever context is being modeled. Validation would probably require experts in that domain to confirm that the model captures those phenomena to some standard of fidelity. Not sure if that's regurgitating facts to you -- it isn't to me.
But I don't know what you're asking exactly. Maybe you could specify what it is you mean by "real world model" and what you take fact-regurgitating to mean.