
Comment by jsheard

20 hours ago

Isn't this still essentially "vibe simulation" inferred from videos? Surface-level visual realism is one thing, but expecting it to figure out the exact physical mechanics of sailing just by watching boats, and usefully abstract that into a gamified form, is another thing entirely.

Yeah, I have a whole lot of trouble imagining this replacing traditional video games any time soon; we already have very good, performant representations of how physics works, and games are tuned so the player has an enjoyable experience.

There's obviously something insanely impressive about these Google experiments, and it certainly feels like there's some kind of use case for them somewhere, but I'm not sure exactly where they fit in.

Why wouldn't they just hook it into something like PhysX?

  • Google has made it clear that Genie doesn't maintain an explicit 3D scene representation, so I don't think hooking in "assists" like that is on the table. Even if it were, the AI layer would still have to infer things like object weight, density, friction, and linkages correctly. Garbage in, garbage out. (A toy sketch of that failure mode follows below the thread.)

    • Google could try to build an actual 3D scene with AI, using meshes or metaballs or something. That would allow for more persistence, but I expect it would make the AI more brittle and limited, and, because it doesn't really understand the rules for the 3D meshes it created, it wouldn't know how to interact with them. It can only produce fluffy-mushy dream images.
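
To make the "garbage in, garbage out" point concrete, here is a minimal, hypothetical sketch; it has nothing to do with Genie's or PhysX's actual internals. It assumes a pipeline in which a generative layer hands inferred parameters (mass, friction) to a conventional rigid-body solver, and it shows how a plausible-looking mis-estimate of a single friction coefficient throws off the simulated motion. All names and numbers are made up for illustration.

    from dataclasses import dataclass

    G = 9.81  # gravitational acceleration, m/s^2

    @dataclass
    class InferredBody:
        """Parameters an AI layer would have to hand to a rigid-body solver."""
        mass_kg: float         # object weight/inertia
        friction_coeff: float  # kinetic friction against the ground

    def sliding_distance(body: InferredBody, v0: float, dt: float = 1e-3) -> float:
        """Distance a box with initial speed v0 slides before friction stops it."""
        x, v = 0.0, v0
        while v > 0.0:
            # Kinetic friction decelerates the box: a = -mu * g. (Mass cancels
            # here, but a real solver still needs it for collisions, joints, etc.)
            a = -body.friction_coeff * G
            v = max(0.0, v + a * dt)
            x += v * dt
        return x

    if __name__ == "__main__":
        ground_truth = InferredBody(mass_kg=5.0, friction_coeff=0.40)
        mis_inferred = InferredBody(mass_kg=5.0, friction_coeff=0.25)  # looks plausible on video

        v0 = 4.0  # initial speed, m/s
        true_stop = sliding_distance(ground_truth, v0)
        model_stop = sliding_distance(mis_inferred, v0)

        print(f"true stopping distance:      {true_stop:.2f} m")
        print(f"with mis-inferred friction:  {model_stop:.2f} m")
        print(f"error: {100 * (model_stop - true_stop) / true_stop:+.0f}%")

Even in this one-parameter toy the stopping point is off by roughly 60%; a real hand-off to an engine like PhysX would need masses, inertia, joints, and contact properties for every object in the scene, each with the same failure mode.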