
Comment by Night_Thastus

4 hours ago

The whole "foveated streaming" idea sounds absolutely fascinating. If they can actually pull it off accurately in real time, that would be incredible. I can't even imagine the technical work behind the scenes needed to make it all work.

I'd really like to know what the experience of using it is like, both for games and for something like video.

There's an awesome shader on shadertoy that illustrates just how extreme the fovea focus is: https://www.shadertoy.com/view/4dsXzM

Linus the shrill/yappy poodle and his channel are less than worthless IMO.

  • When you full-screen this, it's crazy how tiny the spinning area is. For me it's about an inch or an inch and a half on a 32-inch 4K display at a normal seated position.

    (If I move my head closer it gets larger; if I move further away it gets smaller.)

  • Imagine if we could hook this into game rendering as well. Have super-high-resolution models, textures, shadows, etc. near where the player is looking, and use lower LoDs elsewhere.

    It could really push the boundaries of detail and efficiency, if we could somehow do it in real time for something that complex. (Streaming video sounds a lot easier.) There's a rough sketch of the idea after this thread.

    • As a lover of ray/path tracing I'm obligated to point out: rasterisation gets its efficiency by amortising the cost of per-triangle setup over many pixels. This more or less forces you to do fixed-resolution rendering; it's very efficient at this, which is why even today with hardware RT, rasterisation remains the fastest and most power-efficient way to do visibility processing (under certain conditions). However, this efficiency starts to drop off as soon as you want to do things like stencil reflections, and especially shadow maps, to say nothing of global illumination.

      While there are some recent-ish extensions to do variable-rate shading in rasterisation[0], this isn't variable-rate visibility determination (well, you can do stochastic rasterisation[1], but it's not implemented in hardware), whereas with ray tracing you can distribute rays as finely as you like.

      TL;DR: for foveated rendering, ray tracing is the efficiency king, not rasterisation. But don't worry, ray tracing will eventually replace all rasterisation anyway :)

      [0] https://developer.nvidia.com/vrworks/graphics/variableratesh...

      [1] https://research.nvidia.com/sites/default/files/pubs/2010-06...

    • Foveated rendering is already a thing, but since it needs to be coded for in the game, it's not really being used in PC games. Games designed for PlayStation with the PS VR 2 in mind do use foveated rendering, since the developers know their games will be played on hardware that provides eye tracking.

    • Game rendering is what they're talking about here. John Carmack has talked about this a bunch if you'd like to seed a Google search.
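To make the gaze-driven detail idea above a bit more concrete, here's a minimal sketch (in Python, purely illustrative) that maps angular eccentricity from the gaze point to both a per-pixel ray budget, of the kind the ray-tracing comment describes, and an LoD pick for models and textures. The fall-off constants and function names are made up for illustration, not taken from any shipping engine or from Valve.

```python
def rays_per_pixel(eccentricity_deg, base_rays=16):
    """Map angular distance from the gaze point (in degrees) to a ray/sample
    budget, roughly mimicking how visual acuity falls off outside the fovea.
    Constants are illustrative only."""
    falloff = 1.0 / (1.0 + 0.5 * max(0.0, eccentricity_deg - 1.5))
    return max(1, round(base_rays * falloff))


def lod_level(eccentricity_deg, max_lod=4):
    """Pick a mesh/texture LoD from the same eccentricity measure:
    LoD 0 (full detail) near the gaze, coarser levels further out."""
    return min(max_lod, int(eccentricity_deg // 5))


for ecc in (0, 2, 5, 10, 20, 40):
    print(f"{ecc:>2} deg: {rays_per_pixel(ecc):>2} rays, LoD {lod_level(ecc)}")
```

The point of the ray-tracing argument above is that a ray tracer can honour a per-pixel budget like this directly, whereas a rasteriser mostly has to determine visibility and shade on a fixed-resolution grid.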

If you use a Quest Pro with Steam Link over a WiFi 6E access point, that should accurately represent the experience of using it.

It's close to imperceptible in normal usage.

Foveated streaming should be much easier to implement than foveated rendering: just encode two streams, a low-res one and a high-res one, and move the high-res one around.
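A minimal sketch of that two-stream idea, assuming the client has already decoded both streams and only has to composite them for display. The function name, resolutions, and nearest-neighbour upscale are my own illustration, not how Valve actually implements it:

```python
import numpy as np

def composite_foveated_frame(low_res, high_res_inset, gaze_xy, out_shape):
    """Composite a foveated frame from two decoded streams: a full-field
    low-resolution frame and a small full-resolution inset that follows
    the gaze. All names and sizes here are hypothetical."""
    H, W = out_shape
    # Upscale the low-res stream to cover the whole display
    # (nearest-neighbour to keep the sketch dependency-free).
    frame = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)[:H, :W]

    # Paste the high-res inset over the region the eye is looking at.
    ih, iw = high_res_inset.shape[:2]
    x0 = int(np.clip(gaze_xy[0] - iw // 2, 0, W - iw))
    y0 = int(np.clip(gaze_xy[1] - ih // 2, 0, H - ih))
    frame[y0:y0 + ih, x0:x0 + iw] = high_res_inset
    return frame

# Example: 1280x1280 eye buffer, half-res base stream, 256x256 inset.
low = np.zeros((640, 640, 3), dtype=np.uint8)
inset = np.full((256, 256, 3), 255, dtype=np.uint8)
out = composite_foveated_frame(low, inset, gaze_xy=(800, 600), out_shape=(1280, 1280))
print(out.shape)  # (1280, 1280, 3)
```

In practice you'd also blend the seam between the two regions and compensate for eye-tracking latency, but the core compositing step really is about this simple.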

I'm super curious how they will implement it: whether it's a general API in SteamVR that headsets like the Bigscreen Beyond could use, or something more tailored to the Frame. I hope it's the former, since to me it sounds like all you need is eye-tracking input and the two streams; the rest could be handled by SteamVR.

There is an LTT video: https://www.youtube.com/watch?v=dU3ru09HTng

Linus says he cannot tell it is actually foveated streaming.