Comment by astrange

3 months ago

It doesn't necessarily need to model the world to learn how to perform actions, though. That was the topic of this old GOFAI paper:

https://aaai.org/papers/00268-aaai87-048-pengi-an-implementa...

The system it describes (Pengi) instead works by "doing the thing that worked last time".

As an example, you don't usually need to know what is in your garbage in order to take out the trash.
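
To make that concrete, here's a minimal Python sketch of that kind of reactive agent. To be clear, this is not Pengi's actual architecture, just a toy illustration of the "do what worked last time" policy, with made-up situation and action names:

    import random

    ACTIONS = ["push", "pull", "wait", "move"]

    class ReactiveAgent:
        def __init__(self):
            # situation -> the action that succeeded there last time
            self.worked_last_time = {}

        def act(self, situation):
            # Reuse whatever worked before in this situation; otherwise
            # try an arbitrary action. No representation of world state.
            return self.worked_last_time.get(situation, random.choice(ACTIONS))

        def feedback(self, situation, action, succeeded):
            # Remember successes; forget failures.
            if succeeded:
                self.worked_last_time[situation] = action
            else:
                self.worked_last_time.pop(situation, None)

    agent = ReactiveAgent()
    agent.feedback("trash_bag_full", "move", succeeded=True)
    assert agent.act("trash_bag_full") == "move"

The agent never represents what's in the world (or in the garbage), only which reaction paid off for each perceived situation.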

Right - I've no idea how LeCun thinks about it, but I don't see that an animal needs, or would have, any more of a "world model" than something like an LLM. I'm sure all the research into rats in mazes etc. has something to say about their representations of location, but given a goal of prediction it seems that all that is needed is a combination of pattern recognition and sequence prediction - not an actual explicit "declarative" model.

It seems that things like place cells and grandmother cells are part of the pattern recognition component, but recognizing landmarks and other predictively relevant information doesn't mean we have a complete, coherent model of the environments we experience - more likely a fragmented one built out of task-relevant memories. Our subjective experience of driving is informative here: we don't have a mental road map so much as familiarity with specific routes and landmarks. We know to turn right at the gas station, and so on.
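
A toy sketch of what that fragmented, route-shaped knowledge might look like - pattern recognition (spotting a landmark) triggering the next step in a learned sequence, with no global map anywhere. The landmark and action names are invented for illustration:

    route_home = {
        "gas_station": "turn_right",
        "red_barn": "turn_left",
        "oak_tree": "arrive",
    }

    def drive(landmarks_seen):
        # React only to recognized, task-relevant landmarks; everything
        # else on the road is ignored. There is no map, only the route.
        for landmark in landmarks_seen:
            action = route_home.get(landmark)
            if action:
                print(f"at {landmark}: {action}")

    drive(["billboard", "gas_station", "pedestrian", "red_barn", "oak_tree"])

Note that this "driver" can follow the route perfectly while having no idea how the gas station and the barn relate to each other spatially - exactly the kind of fragmented, task-relevant knowledge described above.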