Comment by Night_Thastus

3 months ago

The whole "foveated streaming" sounds absolutely fascinating. If they can actually pull off doing it accurately in real time, that would be incredible. I can't even imagine the technical work behind the scenes to make it all work.

I'd really like to know what the experience is like of using it, both for games and something like video.

There's an awesome shader on shadertoy that illustrates just how extreme the fovea focus is: https://www.shadertoy.com/view/4dsXzM

Linus the shrill/yappy poodle and his channel are less than worthless IMO.

  • When you full screen this, it's crazy how tiny the area that spins is. For me it's like an inch or inch and a half on a 32 inch 4k display at a normal seated position.

    (If I move my head closer it gets larger, further and it gets smaller)

  • That's crazy. I feel dumb for initially thinking it was somehow doing eye tracking to achieve this, despite having no such hardware installed.

    I would be curious to see a similar thing that includes flashing. Anecdotally, my peripheral vision seems to be highly sensitive to flashing/strobing even if it is evidently poor at seeing fine details. Makes me think compression in the time domain (e.g. reducing frame rate) would be less effective. But I wonder if the flashing would "wake up" the peripheral vision to changes it can't normally detect.

    Not sure what the random jab at Linus is about.

    • It’s normal to be "more sensitive" to brightness differences in the peripheral areas compared to the fovea. The fovea has more color receptors (cones), while the other areas have comparatively more monochromatic, brightness-sensitive receptors (rods). The overall receptor density in the fovea is also much higher.

  • Imagine if we could hook this into game rendering as well. Have super high resolution models, textures, shadows, etc near where the player is looking, and use lower LoDs elsewhere.

    It could really push the boundaries of detail and efficiency, if we could somehow do it in real time for something that complex. (Streaming video sounds a lot easier.) A toy sketch of the gaze-based LoD idea follows this thread.

    • Foveated rendering is already a thing. But since it needs to be coded for in the game, it's not really used in PC games. Games designed for PlayStation with the PS VR2 in mind do use foveated rendering, since the developers know their games will be played on hardware that provides eye tracking.

    • That's foveated rendering. Foveated streaming, which is newly presented here, is a more general approach which can apply to any video signal, be it from a game, movie or desktop environment.

      They are complementary things. Foveated rendering means your GPU has to do less work, which means higher frame rates for the same resolution/quality settings. Foveated streaming is more about being able to get the video data across from the rendering device to the headset. You need both to get great results, as either rendering or video transport could be the bottleneck.

    • Game rendering is what they're talking about here. John Carmack has talked about this a bunch if you'd like to seed a Google search.


    • As a lover of ray/path tracing I'm obligated to point out: rasterisation gets its efficiency by amortising the cost of per-triangle setup over many pixels. This more or less forces you to do fixed-resolution rendering; it's very efficient at this, which is why even today with hardware RT, rasterisation remains the fastest and most power-efficient way to do visibility processing (under certain conditions). However, this efficiency starts to drop off as soon as you want to do things like stencil reflections, and especially shadow maps, to say nothing of global illumination.

      While there are some recent-ish extensions to do variable-rate shading in rasterisation[0], this isn't variable-rate visibility determination (well, you can do stochastic rasterisation[1], but it's not implemented in hardware), and with ray tracing you can do as fine-grained a distribution of rays as you like. (A toy sketch of eccentricity-weighted ray budgeting follows this thread.)

      TL;DR for foveated rendering, ray tracing is the efficiency king, not rasterisation. But don't worry, ray tracing will eventually replace all rasterisation anyway :)

      [0] https://developer.nvidia.com/vrworks/graphics/variableratesh...

      [1] https://research.nvidia.com/sites/default/files/pubs/2010-06...

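Not from the announcement, but to make the gaze-based LoD idea above concrete: a toy Python sketch that maps an object's angular distance from the gaze direction to a level of detail. The acuity falloff constant and all the names here are made up for illustration, not anyone's actual implementation.

    import math

    # Illustrative acuity falloff: relative acuity drops roughly as
    # 1 / (1 + eccentricity / E2). E2 here is a made-up constant, not a spec.
    E2_DEGREES = 2.5

    def relative_acuity(eccentricity_deg):
        """Rough relative visual acuity at an angular distance from the gaze point."""
        return 1.0 / (1.0 + eccentricity_deg / E2_DEGREES)

    def pick_lod(gaze_dir, object_dir, num_lods=5):
        """Pick a level of detail (0 = full detail) from the angle between the
        normalized gaze direction and the direction to the object."""
        dot = max(-1.0, min(1.0, sum(g * o for g, o in zip(gaze_dir, object_dir))))
        ecc_deg = math.degrees(math.acos(dot))
        acuity = relative_acuity(ecc_deg)
        # Less geometric/texture detail where acuity is low.
        return min(num_lods - 1, int((1.0 - acuity) * num_lods))

    # Example: an object 20 degrees off-gaze gets a coarse LoD.
    off = (math.sin(math.radians(20)), 0.0, math.cos(math.radians(20)))
    print(pick_lod((0.0, 0.0, 1.0), off))  # -> 4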
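
And along the same lines, a toy sketch of the "fine-grained distribution of rays" point: a per-pixel sample budget that falls off with eccentricity from the gaze point. Again, every constant is illustrative only.

    import math

    def foveated_sample_counts(width, height, gaze_px, base_spp=8, min_spp=1,
                               pixels_per_degree=4.0, e2_deg=2.5):
        """Toy per-pixel ray budget: more samples per pixel near the gaze point,
        fewer in the periphery. Constants are illustrative only."""
        gx, gy = gaze_px
        counts = []
        for y in range(height):
            row = []
            for x in range(width):
                # Angular distance of this pixel from the gaze point.
                ecc_deg = math.hypot(x - gx, y - gy) / pixels_per_degree
                acuity = 1.0 / (1.0 + ecc_deg / e2_deg)
                row.append(max(min_spp, round(base_spp * acuity)))
            counts.append(row)
        return counts

    # Example: gaze at the centre of a small 256x256 frame.
    spp = foveated_sample_counts(256, 256, gaze_px=(128, 128))
    print(spp[128][128], spp[0][0])  # ~8 samples at the gaze point, 1 in the corner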

Foveated streaming should be much easier to implement than foveated rendering. Just encode two streams, a low-res one and a high-res one, and move the high-res one around.
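
To make the two-stream idea concrete, here's a minimal NumPy sketch (entirely hypothetical, not Valve's implementation): upscale the low-res stream to display resolution and paste the high-res crop at the reported gaze position. A real client would blend the seam, do this per eye, and deal with latency/reprojection.

    import numpy as np

    def composite_foveated(low_res_frame, high_res_crop, gaze_xy, upscale=4):
        """Paste a high-res crop (centred on the gaze point) over an upscaled
        low-res base frame. Illustrative only."""
        # Nearest-neighbour upscale of the low-res stream to display resolution.
        base = low_res_frame.repeat(upscale, axis=0).repeat(upscale, axis=1)
        ch, cw = high_res_crop.shape[:2]
        gx, gy = gaze_xy  # gaze position in display pixels
        # Clamp the crop so it stays inside the frame.
        y0 = int(np.clip(gy - ch // 2, 0, base.shape[0] - ch))
        x0 = int(np.clip(gx - cw // 2, 0, base.shape[1] - cw))
        base[y0:y0 + ch, x0:x0 + cw] = high_res_crop
        return base

    # Dummy example: 480x480 low-res base shown at 1920x1920, 512x512 high-res inset.
    low = np.zeros((480, 480, 3), dtype=np.uint8)
    crop = np.full((512, 512, 3), 255, dtype=np.uint8)
    frame = composite_foveated(low, crop, gaze_xy=(960, 960))
    print(frame.shape)  # (1920, 1920, 3)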

There is a LTT video: https://www.youtube.com/watch?v=dU3ru09HTng

Linus says he cannot tell that it is actually doing foveated streaming.

I'm super curious how they will implement it: whether it's a general API in SteamVR that headsets like the Bigscreen Beyond could use, or whether it's more tailored towards the Frame. I hope it's the former, since to me it sounds like all you need is eye-tracking input and the two streams; the rest could be done by SteamVR.

If you use a Quest Pro and use Steam Link with a WiFi 6E access point, that should accurately represent the experience of using it.

It's close to imperceptible in normal usage.