Comment by seedie

1 day ago

Imo they explain pretty well what they are trying to achieve with SIMA and Genie in the Google DeepMind Podcast[1]. They see it as the way to get to AGI: letting AI agents learn for themselves in simulated worlds, kind of like how they let AlphaGo train for Go across an enormous number of simulated games.

[1] https://youtu.be/n5x6yXDj0uo

That makes even less sense to me, because an AI agent cannot learn effectively from a hallucinated world that lacks internal consistency guarantees. If anything, it's an even stronger case for leveraging standard game engines instead.

"I need to go to the kitchen, but the door is closed. Easy. I'll turn around and wait for 60 seconds." -AI agent trained in this kind of world

If that's the goal, then the technology for how these agents "learn" would be the most interesting part, even more than the demos in the link.

LLMs can barely remember the coding style I keep asking them to stick to, despite numerous prompts and despite my stuffing that guideline into my (whatever is the newest flavour of product-specific markdown file). They keep expanding the context window to work around that problem.

If they really have something for long-term learning and growth that can help AI agents, they should already be leveraging it for competitive advantage.