Comment by phailhaus
19 hours ago
> Problem is, that's not what we've observed to happen as these models get better
Eh? Context rot is extremely well known. The longer you let the context grow, the worse LLMs perform. Many coding agents will pre-emptively compact the context or force you to start a new session altogether because of this. For Genie to create a consistent world, it needs to maintain context of everything, forever. No matter how good it gets, there will always be a limit. This is not a problem if you use a game engine and code it up instead.
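The compaction behavior described above can be sketched roughly like this. This is a hypothetical illustration, not any real agent's implementation: the `Message`, `summarize`, and `compact` names and the token budget are all made up for the example, and the token counter is a crude word-count stand-in for a real tokenizer.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str

def count_tokens(messages):
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return sum(len(m.text.split()) for m in messages)

def summarize(messages):
    # Stand-in for an LLM summarization call: collapse old turns into
    # one system message so the details stop occupying the window.
    lines = [f"{m.role}: {m.text[:40]}" for m in messages]
    return Message("system", "Summary of earlier turns:\n" + "\n".join(lines))

def compact(history, budget=50, keep_recent=2):
    """Pre-emptive compaction: if the history exceeds the token budget,
    replace everything except the most recent turns with a summary."""
    if count_tokens(history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent
```

The point of the sketch is the trade-off: compaction keeps the window bounded, but every summarization step loses detail, which is exactly why a world model that must remember everything forever can't rely on it.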
The models, not the context. When it comes to weights, "quantity has a quality all its own" doesn't even begin to describe what happens.
Once you hit a billion or so parameters, rocks suddenly start to think.
We're talking about context here, though. The first couple of seconds of Genie are great, but over time it degrades. It will always degrade, because it's hallucinating a world and needs to keep track of too many things.
That has traditionally been the problem with these types of models, but Genie is supposed to maintain coherence for up to 60 seconds.
I've tried using it a couple of times but can't get in; it's either down or hopelessly underprovisioned by Google. Do you have any links to videos showing that the quality degrades after only a few seconds?