Comment by ptidhomme

2 months ago

Now of course, the only input LLMs have is human text (for text-only LLMs, anyway). So their model is entirely dependent on how we see the world. Still, I wouldn't reduce LLMs to mere descriptions of human understanding. They can articulate concepts in a rather sensible way, concepts that don't exist as such in the training corpus. Which is exactly what it means for them to have a model, however limited or imperfect.

"They can articulate concepts... that [don't exist] in the training corpus" — yes, but that doesn't necessarily mean they have a model [of the world]. You might want to say they are articulating the plausible (i.e., something that fits with our model of the world), but I think they are producing plausible articulations that we then interpret against our model.