Comment by phailhaus
21 hours ago
I have no idea why Google is wasting their time with this. Trying to hallucinate an entire world is a dead-end. There will never be enough predictability in the output for it to be cohesive in any meaningful way, by design. Why are they not training models to help write games instead? You wouldn't have to worry about permanence and consistency at all, since they would be enforced by the code, like all games today.
Look at how much prompting it takes to vibe code a prototype. And they want us to think we'll be able to prompt a whole world?
This was a common argument against LLMs: that the space of possible next tokens is so vast that a long enough sequence must eventually decay into nonsense, or at least that compounding error will have the same effect.
Problem is, that's not what we've observed as these models get better. In reality there is some metaphysical, coarse-grained substrate of physics/semantics/whatever[1] which these models can apparently construct for themselves in pursuit of ~whatever~ goal they're after.
The initially stated position, and your position: "trying to hallucinate an entire world is a dead-end", is a sort of maximally-pessimistic 'the universe is maximally-irreducible' claim.
The truth is much much more complicated.
[1] https://www.arxiv.org/abs/2512.03750
And going back a little further, it was thought that backpropagation would be impractical, and trying to train neural networks was a dead end. Then people tried it and it worked just fine.
> Problem is, that's not what we've observed to happen as these models get better
Eh? Context rot is extremely well known. The longer you let the context grow, the worse LLMs perform. Many coding agents will pre-emptively compact the context or force you to start a new session altogether because of this. For Genie to create a consistent world, it needs to maintain context of everything, forever. No matter how good it gets, there will always be a limit. This is not a problem if you use a game engine and code it up instead.
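The pre-emptive compaction those agents do can be sketched roughly like this (everything here is hypothetical and simplified, not any particular agent's API):

```python
# Rough sketch of pre-emptive context compaction, in the spirit of what many
# coding agents do. All names and heuristics here are made up for illustration.

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def compact(history: list[str], limit: int) -> list[str]:
    """Keep the newest turns verbatim; fold everything older into one summary line."""
    if sum(rough_token_count(t) for t in history) <= limit:
        return history
    kept, budget = [], limit
    for turn in reversed(history):  # walk from newest to oldest
        cost = rough_token_count(turn)
        if cost > budget:
            break                   # everything older gets summarized away
        kept.insert(0, turn)
        budget -= cost
    dropped = len(history) - len(kept)
    # A real agent would ask the model to write this summary, not stub it out.
    return [f"[summary of {dropped} earlier turns]"] + kept

print(compact(["old turn " * 50, "recent " * 6, "latest " * 6], limit=30))
```

The point of the sketch is the failure mode: whatever didn't fit in `budget` survives only as a lossy summary, which is exactly the "limit" being described.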
The models, not the context. When it comes to weights, "quantity has a quality all its own" doesn't even begin to describe what happens.
Once you hit a billion or so parameters, rocks suddenly start to think.
Imo they explain pretty well what they are trying to achieve with SIMA and Genie in the Google DeepMind Podcast[1]. They see it as the way to get to AGI by letting AI agents learn for themselves in simulated worlds. Kind of like how they let AlphaGo train for Go across an enormous number of simulated games.
[1] https://youtu.be/n5x6yXDj0uo
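The AlphaGo-style idea, an agent improving purely inside a simulator, can be illustrated with a toy example. This is just tabular Q-learning on a five-cell corridor, nothing like DeepMind's actual setup:

```python
# Toy version of "let the agent learn in a simulated world": tabular Q-learning
# on a 5-cell corridor where the goal is the rightmost cell. Purely illustrative.
import random

random.seed(0)
N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action choice: mostly exploit, sometimes explore.
        if random.random() < 0.1:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)       # the "simulator": deterministic corridor physics
        r = 1.0 if s2 == GOAL else -0.01     # reward comes from the simulated world
        bootstrap = 0.9 * max(Q[(s2, -1)], Q[(s2, +1)]) * (s2 != GOAL)
        Q[(s, a)] += 0.5 * (r + bootstrap - Q[(s, a)])
        s = s2

# After training, the greedy policy should walk right in every non-goal cell.
policy = {s: max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The whole scheme only works because the corridor's transition rule is perfectly consistent, which is exactly the property the comments below question in a hallucinated world.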
That makes even less sense, because an AI agent cannot learn effectively from a hallucinated world without internal consistency guarantees. It's an even stronger case for leveraging standard game engines instead.
"I need to go to the kitchen, but the door is closed. Easy. I'll turn around and wait for 60 seconds." -AI agent trained in this kind of world
If that's the goal, the technology for how these agents "learn" would be the most interesting one, even more than the demos in the link.
LLMs can barely remember the coding style I keep asking them to stick to, despite numerous prompts and despite stuffing that guideline into my (whatever the newest flavour of product-specific markdown file is). They keep expanding the context window to work around that problem.
If they have something for long-term learning and growth that can help AI agents, they should be leveraging it for competitive advantage.
Take the positive spin: what if you could put in all the inputs and have it simulate real-world scenarios you can walk through to benefit mankind, e.g. disaster scenarios, events, plane crashes, traffic patterns? There are a lot of useful applications for it.
I don't like the framing at this time, but I also get where it's going. The engineer in me is drawn to it, but the Muslim in me is very scared to hear anyone talk about creating worlds... But again, I have to separate my view from the reality that this could have very positive real-world benefits when you can simulate scenarios. I could put in a 2-page or 10-page scenario that gets played out or simulated and walk through it: not just predictive stuff, but also things that have already happened, so I can map crime scenes or anything else.
In the end this performance art exists because they are a product company being benchmarked by Wall Street, and they'll need customers for the technology. At the same time, they probably already have uses for it internally.
> What if you could put in all the inputs and it can simulate real world scenarios you can walk through to benefit mankind e.g disaster scenarios, events, plane crashes, traffic patterns.
This is only a useful premise if it can do any of those things accurately, as opposed to dreaming up something kinda plausible based on an amalgamation of every vaguely related YouTube video.
> What if you could put in all the inputs and it can simulate real world scenarios you can walk through to benefit mankind e.g disaster scenarios, events, plane crashes, traffic patterns.
What's the use? Current scientific models that clearly show natural disasters coming, and how to prevent them, are already being ignored. Hell, ignoring scientific consensus is a fantastic political platform.
A hybrid approach could maybe work: keep a more or less standard game engine for coherence, and use this kind of generative AI mainly as a short-term rendering and physics engine.
I've thought about this same idea but it probably gets very complicated.
Let's say you simulate a long museum hallway with some vases in it. Who holds what? The base game engine has the geometry, but once the player pushes a vase and moves it, the AI needs to inform the engine; then, to draw the next frame, it has to read from the engine first, update the position in the video feed, and feed that back to the engine.
What happens if the state diverges? Who wins? If the AI wins, then... why have the engine at all?
It is possible, but then who controls physics, the engine or the AI? The AI could have a different understanding of the details of the vase. What happens if the vase has water inside? Who simulates that? What happens if the AI decides to break the vase? Who simulates the AI?
I don't doubt that some sort of scratchpad to keep track of in-game state would be useful, but I suspect the researchers are expecting the AI to keep track of everything in its own "head", because that's the most flexible solution.
Then maybe the engine should be less about really simulating the 3D world and more about doing its best to preserve consistency: providing memory and saving context rather than truly simulating much beyond higher-level concerns (at which point we might wonder if it couldn't be part of the model directly somehow). But writing those lines, I realize there would probably still be many edge cases exactly like the ones you're describing...
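The division of labor being debated above can be made concrete with a toy sketch, where the engine is the single source of truth and the generative model (stubbed out below) only ever renders an engine snapshot. All of this is hypothetical:

```python
# Minimal sketch of the hybrid idea: the engine owns the world state, and the
# generative model is only ever asked to render the engine's latest snapshot.
# 'fake_render' stands in for the video model; names here are made up.
from dataclasses import dataclass, field

@dataclass
class Engine:
    """Authoritative world state: object id -> (x, y) position."""
    objects: dict = field(default_factory=dict)

    def push(self, obj_id: str, dx: float, dy: float):
        x, y = self.objects[obj_id]
        self.objects[obj_id] = (x + dx, y + dy)

def fake_render(snapshot: dict) -> str:
    # A real system would condition a video model on this snapshot each frame,
    # so any drift in the model's "imagination" is overwritten by engine truth.
    return " ".join(f"{k}@{x:.1f},{y:.1f}" for k, (x, y) in sorted(snapshot.items()))

engine = Engine({"vase": (3.0, 1.0)})
frame1 = fake_render(engine.objects)   # before the player acts
engine.push("vase", 0.5, 0.0)          # player shoves the vase; engine updates state
frame2 = fake_render(engine.objects)   # next frame drawn from engine truth, not model memory
print(frame1)
print(frame2)
```

In this arrangement the divergence question is answered by fiat (the engine always wins), which dodges the consistency problem but also limits the renderer to whatever the engine can represent.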
As a kid in the early 1980s, I spent a lot of time experimenting with computers by playing basic games and drawing with crude applications. And it was fun. I would have loved to have something like Google's Genie to play with. Even if it never evolved, the product in the demos looks good enough for people to get value from.
It's been very profitable for drug dealers for centuries, who wouldn't want a piece of that market?
Because games already exist, and it would be easier for LLMs to write games rather than hallucinate videos.
Genie isn't about making games... Granted, for some reason they don't put this at the top. Classic Google, not communicating well...
The key part is simulation. That's what they are building this for. Ignore everything else.
Same with Nvidia's Earth 2 and Cosmos (and a bit like Isaac). Games or VR environments are not the primary drive; the primary drive is training robots (including non-humanoids, such as Waymo) and just getting the data. It's exactly because of this that perfect physics (or, let's be honest, realistic physics[0,1]) isn't required. Getting 50% of the way there in simulation really does cut down the costs of development, even if we recognize that the cost steepens as we approach "there". I really wish they didn't call them "world models", or more specifically didn't shove the word "physics" in there, but hey, is it really marketing if they don't claim a golden goose can not only lay actual gold eggs but also diamonds, and that its honks cure cancer?
[0] Looking right does not mean it is right. Maybe it'll match your intuition or an undergrad general-physics-with-calculus class, but talk to a real physicist if you doubt me here. Even one with just an undergrad degree will tell you this physics is unrealistic, and anyone worth their salt will tell you how unintuitive physics becomes as you get realistic, well before approaching the quantum. Go talk to the HPC folks and ask them why they need supercomputers... Sorry, physics can't be done from observation alone.
[1] Seriously, look at their demo page. It really is impressive, don't get me wrong, but I can't find a single video without major physics problems. The "high-altitude open world featuring deformable snow terrain" demo looks like it is simulating Legolas[2], not a real person. The work is impressive, but it isn't anywhere near realistic: https://deepmind.google/models/genie/
[2] https://www.youtube.com/watch?v=O4ZYzbKaVyQ
But it's not simulating, is it? It's hallucinating videos with an input channel to guide what the video looks like. Why do that instead of just picking Unreal, Unity, etc and having it actually simulated for a fraction of the effort?
Depends on your definition of simulation but yeah, I think you understand.
I think it really comes down to dev time and adaptability. But honestly, I'm mostly with you. I don't think this is a great route. I have a lot of experience in synthetic data generation, and nothing beats high-quality data. I do think we should develop world models, but I wouldn't call something a world model unless it actually models a physics. And I mean "a physics", not "what people think of as 'physics'" (i.e. the real world): I mean having a counterfactual representation of an environment. Our physics equations are an extremely compressed representation of our reality. You can't generate these representations through observation alone, and that is the naive part of the usual approach to developing world models. But we'd need to go into metaphysics, and that's a long conversation not well suited for HN.
These simulations are helping, but they have a clear limit to their utility. I think too many people believe that if you just feed the models enough data, they'll learn. Hyperscaling is a misunderstanding of the Bitter Lesson that slows development despite showing some progress.
Why is it a dead end? You don't meaningfully explain that. These models look like you can interact with them, and they seem to replicate physics models.
They don't though, they're hallucinated videos. They're feeding models tons and tons of 2D videos and hoping they figure out physics from them, instead of just using a game engine and having the LLM write something up that works 100% of the time.
On the flip side, the emergent properties that come from some of these wouldn’t be replicable by an engine. A moss covered rock realistically shedding moss as it rolls down a hill. Condensation aggregating into beads and rivulets on glass. An ant walking on a pitcher plant and being able to walk inside it and see bugs drowned from its previous meal. You’re missing the forest for the trees.