Comment by HarHarVeryFunny

3 months ago

Right - I've no idea how LeCun thinks about it, but I don't see that an animal needs, or would have, any more of a "world model" than something like an LLM. I'm sure all the research into rats in mazes etc. has something to say about their representations of location and so on, but given a goal of prediction, it seems that all that's needed is a combination of pattern recognition and sequence prediction - not an actual explicit "declarative" model.

It seems that things like place cells and grandmother cells are part of the pattern recognition component, but recognizing landmarks and other prediction-relevant information doesn't mean we have a complete, coherent model of the environments we experience - more likely a fragmented one built from task-relevant memories. Our subjective experience of driving is informative here - we don't have a mental road map so much as familiarity with specific routes and landmarks. We know to turn right at the gas station, etc.