
Comment by sebmellen

15 hours ago

Can you explain this “world model” concept to me? How do you actually interface with a model like this?

One theory of how humans work is the so-called predictive coding approach. The theory assumes that the human brain works somewhat like a Kalman filter: we have an internal model of the world that predicts what will happen next and then checks whether the prediction is congruent with the observed changes in reality. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. When researchers talk about world models, they tend to mean internal models of the actual external world, that is, models that can predict what happens next based on input streams like vision.

Why is this idea of a world model helpful? Because it enables several interesting things: predicting what happens next, modeling counterfactuals (what would happen if I do X or don't do X), and many other capabilities that tend to be needed for actual principled reasoning.
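
To make the predictive coding idea concrete, here is a minimal sketch in Python. It is my own toy illustration, not taken from the literature; all names and constants are made up. An internal estimate predicts incoming observations, and "learning" is nothing more than nudging the estimate to reduce the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

true_state = 2.0       # the "external world" the model never observes directly
estimate = 0.0         # the internal model's current belief
learning_rate = 0.1    # how strongly prediction errors correct the belief

for step in range(100):
    observation = true_state + rng.normal(scale=0.5)  # noisy sensory input
    prediction = estimate                              # top-down prediction
    error = observation - prediction                   # bottom-up error signal
    estimate += learning_rate * error                  # learning = minimizing error

print(f"final estimate: {estimate:.2f}, true value: {true_state}")
```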

  • Learning Algorithm Of Biological Networks

    https://www.youtube.com/watch?v=l-OLgbdZ3kk

    In this video we explore Predictive Coding – a biologically plausible alternative to the backpropagation algorithm, deriving it from first principles.

    Predictive coding and Hebbian learning are interconnected learning mechanisms: Hebbian learning rules are used to implement the brain's predictive coding framework. Predictive coding models the brain as a hierarchical system that minimizes prediction errors by sending top-down predictions and bottom-up error signals, while Hebbian learning, often simplified as "neurons that fire together, wire together," provides a biologically plausible way to update the network's weights to improve predictions over time (a minimal sketch of this update rule follows after this list of replies).

  • Learning from the real world, including how it responds to your own actions, is the only way to achieve real-world competency, intelligence, reasoning and creativity, including going beyond human intelligence.

    The capabilities of LLMs are limited by what's in their training data. You can use all the tricks in the book to squeeze the most out of that - RL, synthetic data, agentic loops, tools, etc. - but at the end of the day their core intelligence and understanding are limited by that data and their auto-regressive training. They are built for mimicry, not creativity and intelligence.

  • So... that seems like a possible path towards AGI. Doesn't it?

    • Only if you also provide it with a way for it to richly interact with the world (i.e. an embodiment). Otherwise, how do you train it? How does a world model verify the correctness of its model in novel situations?
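
As referenced above, here is a tiny sketch of the Hebbian weight update ("neurons that fire together, wire together"). It is purely illustrative and not taken from the video; the variable names and sizes are made up.

```python
import numpy as np

pre = np.array([0.9, 0.1, 0.8])   # presynaptic activity
post = np.array([0.7, 0.2])       # postsynaptic activity
weights = np.zeros((2, 3))        # connection strengths (post x pre)
eta = 0.05                        # learning rate

# Hebbian rule: delta_w[i, j] = eta * post[i] * pre[j]
weights += eta * np.outer(post, pre)
print(weights)
```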

The best world model research I know of today is Dreamer 4: https://danijar.com/project/dreamer4/. Here is an interesting interview with the author: https://www.talkrl.com/episodes/danijar-hafner-on-dreamer-v4

Training on 2,500 hours of prerecorded video of people playing Minecraft, they produce a neural net world model of Minecraft. It is basically a learned Minecraft simulator. You can actually play Minecraft in it, in real time.

They then train a neural net agent to play Minecraft and achieve specific goals all the way up to obtaining diamonds. But the agent never plays the real game of Minecraft during training. It only plays in the world model. The agent is trained in its own imagination. Of course this is why it is called Dreamer.

The advantage of this is that once you have a world model, no extra real data is required to train agents. The only input to the system is a relatively small dataset of prerecorded video of people playing Minecraft, and the output is an agent that can achieve specific goals in the world. Traditionally this would require many orders of magnitude more real data to achieve, and the real data would need to be focused on the specific goals you want the agent to achieve. World models are a great way to cheaply amplify a small amount of undifferentiated real data into a large amount of goal-directed synthetic data.
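
Roughly, the two-stage recipe described above looks like the sketch below: fit a world model on offline video, then train an agent entirely inside imagined rollouts from that model. The classes and methods here are toy stand-ins I made up to show the shape of the loop; they are not the actual Dreamer 4 code or API.

```python
import random

class ToyWorldModel:
    """Stand-in for a learned simulator; here it is just a noisy random walk."""
    def fit(self, videos):
        pass                                  # real version: learn dynamics from video
    def reset(self):
        return 0.0                            # initial imagined state
    def step(self, state, action):
        next_state = state + action + random.gauss(0, 0.1)
        reward = -abs(next_state - 5.0)       # toy goal: reach state 5
        return next_state, reward

class ToyAgent:
    def act(self, state):
        return 1.0 if state < 5.0 else -1.0   # placeholder policy
    def update(self, state, action, reward):
        pass                                  # real version: actor-critic update

world_model = ToyWorldModel()
world_model.fit(videos=None)                  # stage 1: world model from offline video

agent = ToyAgent()
for episode in range(10):                     # stage 2: agent trained "in imagination"
    state = world_model.reset()
    for t in range(20):
        action = agent.act(state)
        state, reward = world_model.step(state, action)
        agent.update(state, action, reward)
```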

Now, Minecraft itself is already a world model that is cheap to run, so a learned world model of Minecraft may not seem that useful. Minecraft is just a testbed. World models are very appealing for domains where it is expensive to gather real data, like robotics. I recommend listening to the interview above if you want to know more.

World models can also be useful in and of themselves, as games that you can play, or to generate videos. But I think their most important application will be in training agents.

A world model is a persistent representation of the world (however compressed) that an AI can access and compute over. For example, a weather world model would likely include things like wind speed, surface temperature, various atmospheric layers, total precipitable water, etc. Now suppose we provide a real-time live feed to an AI like an LLM, allowing the LLM to have constant, up-to-date weather knowledge that it loads into context for every new query. This LLM should have a leg up in predictive power.

Some world models can also be updated by their respective AI agents, e.g. "I, Mr. Bot, have moved the ice cream into the freezer from the car" (thereby updating the state of the freezer and the car by transferring the ice cream from one to the other, and making that the context for future interactions).
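
Here is a minimal sketch of that idea: a persistent world state the agent can both update (the ice cream move) and serialize into the model's context for the next query. The dictionary structure and function name are my own illustration, not an established API.

```python
# Persistent world state the agent can read into context and update.
world_state = {
    "car": {"contents": ["ice cream"]},
    "freezer": {"contents": []},
}

def move(item, source, dest):
    """Agent action: transfer an item and update the persistent state."""
    world_state[source]["contents"].remove(item)
    world_state[dest]["contents"].append(item)

move("ice cream", "car", "freezer")

# On each new query, the current state is serialized into the model's context.
context = f"Current world state: {world_state}"
print(context)
```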

  • If your "world model" only models a small portion of the world, I think the more appropriate label is a time-series model. Once you truncate correlated data, the model you're left with isn't very worldly at all.

    • You don't need to load the entire world model in order to be effective at a task. LLM providers already do something similar with model routing.

He is one of those people who think that humans have a direct experience of reality, not mediated by (as Alan Kay put it) three pounds of oatmeal. So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and funhouse-mirror distortions. Our vision flips left and right and delivers images to our nerves upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.

  • Isn't this idea demonstrably false due to the existence of various sensory disorders too?

    I have a disorder characterised by the brain failing to filter out its own sensory noise; my vision is full of analogue-TV-like distortion and other artefacts. Sometimes when it's bad I can see my brain constructing an image in real time rather than the perception happening instantaneously, particularly when I'm out walking. A deer becomes a bundle of sticks becomes a muddy pile of rocks (what it actually is), for example, over the space of seconds. This, to me, is pretty strong evidence that we do not experience reality directly, and instead construct our perceptions predictively from whatever is to hand.

    • The default philosophical position for human biology and psychology is known as Representational Realism: reality as we know it is mediated by changes and transformations applied to sensory (and other) input data in a complex process, and it ends up sufficiently transformed to be "different enough" from what is actually real.

      Direct Realism is the idea that reality is directly available to us and that any intermediate transformations made by our brains are not enough to move the dial.

      Direct Realism has long been refuted. There are a number of examples, e.g. the hot and cold bucket; the straw in a glass of water; rainbows and other epiphenomena; etc.

    • Pleased to meet someone else who suffers from "visual snow". I'm fortunate in that like my tinnitus, I'm only acutely aware of it when I'm reminded of it, or, less frequently, when it's more pronounced.

      You're quite correct that our "reality" is in part constructed. The Flashed Face Distortion Effect [0][1] (wherein faces in the peripheral vision appear distorted due to the brain filling in the missing information with what was there previously) is just one example.

      [0] https://en.wikipedia.org/wiki/Flashed_face_distortion_effect [1] https://www.nature.com/articles/s41598-018-37991-9


  • Whatever idea Yann has of JEPA and its supposed superiority to LLMs, he doesn't seem to have done a good job of "selling it" without resorting to strawmanning LLMs. From what little I gathered (which may be wrong), his objection to LLMs is something like: the "predict next token" inductive bias is too weak for models to meaningfully learn models of things like physics, sufficient to properly predict motion and do well on physical reasoning tasks.

  • the fact that a not-so-direct experience of reality produces "good enough results" (e.g. human intelligence) doesn't mean that a more-direct experience of reality won't produce much better results, and it clearly doesn't mean it can't produce these better results in AI

    your whole reasoning is neither here nor there, and attacks a straw man - YLC for sure knows that human experience of reality is heavily modified and distorted

    but he also knows, and I'd bet he's very right on this, that we don't "sip reality through a narrow straw of tokens/words", and that we don't learn "just from our/approved written down notes", and only under very specific and expensive circumstances (training runs)

    anything closer to more-direct world models (as LLMs are of course world models only at a very indirect level) has a very high likelihood of yielding lots of benefits

  • And LLMs are trained on humans trying to describe all of this through text. The point is not whether humans have a true experience of reality; it's that human writings are a poor descriptor of reality anyway, and so LLMs cannot be a stepping stone.

  • The world model of a language model is a ... language model. Imagine the mind of a blind, limbless person, locked in a cell their whole life, never having experienced anything different, who just listens all day to a piped-in feed of randomized snippets of Wikipedia, 4chan and math olympiad problems.

    The mental model this person has of this feed of words is what an LLM at best has (though the human's model is likely much richer, since they have a brain, not just a transformer). No real-world experience or grounding, therefore no real-world model. The only model they have is of the world they have experience with - a world of words.

  • > humans have a direct experience of reality not mediated by as Alan Kay put it three pounds of oatmeal

    Is he advocating for philosophical idealism of the mind, or does he have an alternate physicalist theory?

    • I don't think he actually understands direct realism, idealism, or representational realism as distinctions whatsoever.

  • That way he may get a very good lizard. Getting an Einstein, though, takes layers of abstraction.

    My thinking is that such world models should be integrated with LLMs the way the lower levels of perception are integrated with higher brain function.

The way I think of it (I might be wrong): basically a model that has sensors similar to humans' (eyes, ears) and has action-oriented outputs with some objective function (a goal to optimize against). I think autopilot is the closest thing to a world model, in that it has eyes, it has the ability to interact with the world (go in different directions), and it can see the response.
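
A loose Python sketch of that sensors-in / actions-out / objective picture, using a lane-keeping toy as the stand-in for autopilot. All names and numbers here are illustrative, not a real system.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    camera: float      # "eyes", e.g. signed distance from lane center
    microphone: float  # "ears"

def objective(distance_from_center: float) -> float:
    return -abs(distance_from_center)          # goal to optimize: stay centered

def policy(obs: Observation) -> str:
    # a real system would use a learned world model to predict each action's effect
    return "steer_left" if obs.camera > 0 else "steer_right"

obs = Observation(camera=0.4, microphone=0.0)
print(policy(obs), "score:", objective(obs.camera))
```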