Comment by LarsDu88

3 days ago

Every generation thinks the current generation of graphics won't be topped, but I think you have no idea what putting realtime generative models into the rendering pipeline will do for realism. We will finally get rid of the uncanny valley effect with facial rendering, and the results will almost certainly be mindblowing.

Every generation also thinks that the uncanny valley will be conquered in the next generation ;)

The quest for graphical realism in games has been running into a wall of diminishing returns for quite a while now (see hardware raytracing: all that effort for slightly better reflections and shadows, yay?). What we need most right now is more risk-taking in gameplay from big-budget games.

I think the inevitable near future is that games are not just upscaled by AI, but they are entirely AI generated in realtime. I’m not technical enough to know what this means for future console requirements, but I imagine if they just have to run the generative model, it’s… less intense than how current games are rendered for equivalent results.

  • I don't think you grasp how many GPUs are used to run world simulation models. It is vastly more compute-intensive than the currently dominant realtime paradigm of rasterizing triangles.

    • I’m thinking more procedural generation of assets. If done efficiently enough, a game could generate its assets on the fly, and plan for future areas of exploration. It doesn’t have to be rerendered every time the player moves around. Just once, then it’s cached until it’s not needed anymore.
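      The generate-once-then-cache idea can be sketched roughly like this. Everything here is an illustrative assumption, not any real engine's API: generate_assets() stands in for a call into a hypothetical generative model, and regions are keyed by simple grid coordinates.

```python
# Sketch: on-the-fly asset generation with caching and distance-based eviction.
# generate_assets() is a placeholder for an expensive generative-model call;
# region keys and keep_radius are illustrative assumptions.

class RegionAssetCache:
    def __init__(self, generate_assets, keep_radius=2):
        self.generate = generate_assets   # expensive: run the model once per region
        self.keep_radius = keep_radius    # how far from the player we keep assets
        self.cache = {}                   # (x, y) region -> generated assets

    def get(self, region):
        # Generate once, then reuse the cached result on later visits.
        if region not in self.cache:
            self.cache[region] = self.generate(region)
        return self.cache[region]

    def evict_far(self, player_region):
        # Drop regions outside the keep radius; they can be regenerated
        # later if the player wanders back.
        px, py = player_region
        self.cache = {
            (x, y): assets for (x, y), assets in self.cache.items()
            if abs(x - px) <= self.keep_radius and abs(y - py) <= self.keep_radius
        }
```

      The point of the sketch is just the cost profile: the model runs once per region on first visit, and ordinary lookups afterwards are cache hits, which is much closer to "streaming pre-baked assets" than "re-running the model every frame."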

  • Even if you could generate real-time 4K 120 Hz gameplay that reacts to a player's input and the hardware doesn't cost a fortune, you would still need to deal with all the shortcomings of LLMs: hallucinations, limited context/history, prompt injection, no real grasp of logic / space / whatever the game is about.

    Maybe if there's a fundamental leap in AI. It's still undecided if larger datasets and larger models will make these problems go away.

    • I actually think many of these are non-issues if devs take the most likely path: a hybrid approach.

      You only need to apply generative AI to game assets that do not do well with the traditional triangle rasterization approach. Static objects are already at practically photorealistic level in Unreal Engine 5. You just need to apply enhancement techniques to things like faces. Using the traditionally rendered face as a prior for the generation would prevent hallucinations.
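      The "rendered face as a prior" idea can be sketched as a simple constraint: let the generative model add detail, but clamp its output to stay within a tolerance of the traditionally rendered frame. Here enhance() is a stand-in for whatever generative model is used, and max_dev is an illustrative tolerance, not anything from a real pipeline.

```python
# Toy sketch: constrain a generative enhancement to a rendered prior.
# enhance() is a placeholder for the generative model; clamping its output to
# stay within max_dev of the rendered frame means the model can sharpen detail
# but cannot hallucinate features that contradict the underlying render.
import numpy as np

def enhance_with_prior(rendered, enhance, max_dev=0.1):
    generated = enhance(rendered)             # model output, may drift anywhere
    low = rendered - max_dev                  # per-pixel lower bound from the prior
    high = rendered + max_dev                 # per-pixel upper bound from the prior
    return np.clip(generated, low, high)      # hallucinations get clamped back
```

      In practice this role is played by conditioning inside the model (e.g. img2img-style generation from the rendered frame) rather than a hard post-hoc clamp, but the principle is the same: the render bounds what the model is allowed to invent.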

  • Realtime AI generated video games do exist, and they're as... "interesting" as you might think. Search YouTube for AI Minecraft

  • Good luck trying to tell a "cinematic story" with that approach, or even trying to prevent the player from getting stuck and not being able to finish the game, or even just to reproduce and fix problems, or even just to get consistent result when the player turns the head and then turns it back etc etc ;)

    There's a reason why "build your own story" games like Dwarf Fortress are fairly niche.