Comment by zahlman

5 months ago

> I saw it somewhere else recently, but the idea is that LLMs are language models, not world models.

Part of what distinguishes humans from artificial "intelligence" to me is exactly that we automatically develop models of whatever is needed.

I find these questions interesting, and still somewhat open:

* How much is a large language model effectively a world model (after all, language itself tries to model the world)?

* How much do humans use language in their modeling and reasoning about the world?

* How fit is language for this task, beyond the extent to which humans already use it that way?