Comment by rossnordby

5 years ago

Games that are aiming more for a "cinematic narrative experience" might be perfectly fine with a few 33ms frames of latency, and a total input latency far exceeding 100ms. Competitive twitchy games will tend to be more aggressive. And VR games too, of course.

In principle, you can push GPU pipelines to very low latencies. Continually uploading input and other state asynchronously and rendering from the most recent snapshot (with some interpolation or extrapolation as needed for smoothing out temporal jitter) can get you down to total application-induced latencies below 10ms. Even less with architectures that decouple shading and projection.
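
To make the "render from the most recent snapshot" part concrete, here's a minimal sketch of the idea (the names and the toy single-float state are mine, just for illustration): an input/simulation thread publishes snapshots as fast as it runs, and the render thread grabs whichever one is newest right before building the frame, extrapolating it to the predicted display time.

```cpp
#include <chrono>
#include <mutex>

using Clock = std::chrono::steady_clock;

struct Snapshot {
    float position;      // stand-in for the full input/sim state
    float velocity;      // lets the renderer extrapolate forward
    Clock::time_point t; // when this state was sampled
};

class SnapshotMailbox {
public:
    // Input/simulation thread: publish as often as it runs,
    // independent of the render loop.
    void publish(const Snapshot& s) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = s;
    }

    // Render thread: grab the freshest state right before building the frame.
    Snapshot latest() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return latest_;
    }

private:
    mutable std::mutex mutex_;
    Snapshot latest_{};
};

// Extrapolate to the predicted display time to smooth over jitter
// in when snapshots happen to arrive.
float extrapolatedPosition(const Snapshot& s, Clock::time_point displayTime) {
    std::chrono::duration<float> dt = displayTime - s.t;
    return s.position + s.velocity * dt.count();
}
```

The mutex is only there to keep the sketch short; a real implementation would more likely use a lock-free triple buffer so the render thread never blocks on the publisher.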

Doing this requires leaving the traditional 'CPU figures out what needs to be drawn and submits a bunch of draw calls' model, though. The GPU needs everything required to determine what to draw on its own. If using the usual graphics pipeline, that means all frustum/occlusion culling and draw command generation happens on the GPU, and the CPU simply submits indirect calls that tell the GPU "go draw whatever is in this other buffer that you put together".
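
Roughly what that looks like on the CPU side, sketched with Vulkan (the function and buffer names are assumptions for illustration; pipeline creation, vertex/index buffers, and descriptor sets are assumed to be set up elsewhere):

```cpp
#include <vulkan/vulkan.h>

void recordGpuDrivenFrame(VkCommandBuffer cmd,
                          VkPipeline cullPipeline,   // compute: culling + command generation
                          VkPipeline drawPipeline,   // graphics pipeline
                          VkBuffer indirectCommands, // VkDrawIndexedIndirectCommand array, GPU-written
                          VkBuffer drawCount,        // uint32_t draw count, GPU-written
                          uint32_t maxDraws,
                          uint32_t cullWorkgroups) {
    // 1. The GPU does frustum/occlusion culling and writes the surviving
    //    draw commands plus a count into the indirect buffers.
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, cullPipeline);
    vkCmdDispatch(cmd, cullWorkgroups, 1, 1);

    // 2. Make the compute writes visible to the indirect command reads.
    VkMemoryBarrier barrier{VK_STRUCTURE_TYPE_MEMORY_BARRIER};
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_INDIRECT_COMMAND_READ_BIT;
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT,
                         0, 1, &barrier, 0, nullptr, 0, nullptr);

    // (Beginning the render pass is omitted for brevity; the dispatch
    // above has to happen outside of it.)

    // 3. "Go draw whatever is in this other buffer that you put together."
    //    The CPU never learns what was culled or how many draws survived.
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, drawPipeline);
    vkCmdDrawIndexedIndirectCount(cmd,
                                  indirectCommands, 0,
                                  drawCount, 0,
                                  maxDraws,
                                  sizeof(VkDrawIndexedIndirectCommand));
}
```

The key property is that the command buffer is the same every frame, so it can be recorded once and resubmitted with minimal CPU involvement; all per-frame variation lives in GPU-visible buffers.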

This is something I'm working on at the moment, and the one downside is that other games that don't try to clamp down on latency now cause a subtle but constant frustration.