Comment by still_grokking
3 years ago
Cynical comment ahead, beware!
---
Does this actually even matter today, when every click or key press triggers dozens of fat network requests going around the globe on top of a maximally inefficient protocol?
Or to summarize what we see here: we've built layers of madness. Now we just have to deal with the fallout…
The result is in no way surprising, given that we haven't refactored our systems in over 50 years and have just kept putting new things on top.
If you aren't familiar, check out Winning Run [1], a 3D arcade racing game from 1988, about the best possible with custom hardware at the time. Graphics quality is primitive by modern standards. But make sure to watch the video in 60 fps. If there are any hiccups, it's your device playing the video. Smooth and continuous 60 frames per second rendering, with a delay of a few tens of milliseconds in responding to game inputs. It's still very hard to pull that off today, yet it's fundamental to that type of game's overall quality.
[1] https://youtu.be/NBiD-v-YGIA?t=85
WipEout HD on the PS3 managed a super stable 60 FPS at 1080p. It dynamically scales the horizontal rendering resolution for every frame and then scales it up to 1920 pixels in hardware. So the resolution might vary a bit, but at that framerate and at such racing speeds it's not noticeable. The controls were super smooth at any speed; only achievement popups caused the whole game to freeze for half a second.
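For illustration, here's a minimal sketch of that kind of per-frame dynamic horizontal resolution scaling. All the thresholds, widths, and names are made up for the example, not WipEout HD's actual heuristic:

```python
# Minimal sketch of per-frame dynamic horizontal resolution scaling.
# All numbers and names here are illustrative, not taken from WipEout HD.

FRAME_BUDGET_S = 1.0 / 60.0        # 60 FPS target, ~16.7 ms per frame
MIN_WIDTH, MAX_WIDTH = 1120, 1920  # assumed clamp range
OUTPUT_WIDTH = 1920                # hardware upscales the result to this

def next_render_width(current_width: int, last_frame_time_s: float) -> int:
    """Pick the horizontal render resolution for the next frame based on
    how long the previous frame took relative to the frame budget."""
    load = last_frame_time_s / FRAME_BUDGET_S
    if load > 0.95:                          # close to missing the budget: shrink
        current_width = int(current_width * 0.9)
    elif load < 0.80:                        # plenty of headroom: grow back
        current_width = int(current_width * 1.05)
    return max(MIN_WIDTH, min(MAX_WIDTH, current_width))

# e.g. after an 18 ms frame at full width, drop the render resolution:
print(next_render_width(1920, 0.018))        # -> 1728
```

The point of scaling only the horizontal axis is that the hardware scaler can stretch it back to 1920 cheaply, so the frame always ships on time even when the scene gets heavy.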
The RPCS3 emulator can play it at 120 Hz; I recommend it.
I guess keyboard latency is also the biggest problem when playing old games in emulators. I find it often very difficult to play old action games, because you can't time the button presses precisely enough.
I'm using a Razer Huntsman V2 keyboard, which has 8 kHz polling and optical switches. I do not notice any obvious latency from it, and the specification claims sub-millisecond latency from the switch actuation point. That's better performance than is possible from a PS/2 keyboard, because the PS/2 interface is bottlenecked by its slow serial link.
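For a rough sense of the gap, here's a back-of-envelope sketch with assumed typical figures (the PS/2 clock rate varies by keyboard; these are not measurements of any specific device):

```python
# Back-of-envelope comparison of PS/2 serial transfer vs 8 kHz USB polling.
# PS/2 clocks each byte out serially at roughly 10-16.7 kHz,
# framed as 11 bits (start + 8 data + parity + stop).

PS2_CLOCK_HZ = 12_000          # assumed mid-range PS/2 clock
BITS_PER_BYTE = 11             # start + 8 data + odd parity + stop

ps2_byte_ms = BITS_PER_BYTE / PS2_CLOCK_HZ * 1000
usb_poll_ms = 1000 / 8000      # 8 kHz USB polling interval

print(f"PS/2: ~{ps2_byte_ms:.2f} ms just to shift one scancode byte")
print(f"USB @ 8 kHz: {usb_poll_ms:.3f} ms between polls")
```

So even before any controller-side debouncing, the PS/2 wire alone costs on the order of a millisecond per scancode byte, while the 8 kHz USB device gets polled every 0.125 ms.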
This video is not a steady 60FPS. Lots of frames are duplicated or torn. Maybe this was originally 60FPS and got mangled by the recording process.
Yes. Upon close re-watching, I notice several sections with relative frame drops. Still, there are long sections of low-complexity animation that are > 30 fps. My point was that, for certain interactive tasks governed by human visual response time, there is a hard limit of maybe 50 ms. Such a period was just barely adequate to render enough for a convincing 3D animation ~40 years ago. But even today, only so much can be done in that much time.
For completely imperceptible computing, display refresh must be handled within roughly 50 ms. Input must be sampled, everything the current frame depends on must either be computed or already sitting in easily accessed storage, and all updates to the display must be propagated to the framebuffer. Sensitive humans can notice as little as roughly 50 ms of lag or jitter.
This means that for a program dealing with highly interactive graphics linked to an input device, the core event loop must execute in less than 50 ms or so. Even with current blazing-fast machines, this is a tough challenge for anything complex. If this deadline is not met, potentially perceptible lag in rendering will occur. In a 3D rendered scene, the graphics may perceptibly hang or tear for a couple of frames. This is perceptible to a human, though with sustained suspension of disbelief we can mostly ignore it, much as we can ignore the various visual artifacts in 24 fps cinema.
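As a rough illustration, a minimal sketch of such a core event loop (not a real engine loop; an actual renderer would sync to vsync and hand the frame to a GPU swap chain, and the callbacks here are placeholders):

```python
import time

FRAME_BUDGET_S = 1.0 / 60.0   # ~16.7 ms per frame, well under the ~50 ms perception limit

def run_loop(sample_input, update, render, frames=600):
    """Minimal fixed-budget event loop: sample input, update state, render,
    then sleep off whatever is left of the frame budget. Missing the budget
    shows up as a late frame (a visible hiccup) rather than a long stall."""
    next_deadline = time.monotonic() + FRAME_BUDGET_S
    for _ in range(frames):
        events = sample_input()          # read the latest input state
        update(events)                   # advance the simulation one frame
        render()                         # push the new frame toward the display
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)        # made the deadline: idle until the next tick
        # else: budget blown; this frame arrives late and the user may notice
        next_deadline += FRAME_BUDGET_S
```

Everything on the critical path of that loop has to fit in the budget; anything slow (disk, network, garbage collection) has to happen elsewhere or the frame slips.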
That inefficient network has better latency than your computer when trying to show you a pixel: <http://newstmobilephone.blogspot.com/2012/05/john-carmack-ex...>
Except that such a network call can't replace the pixel output; it just adds to the overall latency.
Also, the real latency of web pages is measured in seconds these days. People are happy when they're able to serve a request in under 0.2 s.
Fifteen years ago I used to target 15 ms as seen in the browser's F12 network trace (not as recorded on the server!), and if I mention such a thing these days, people are flabbergasted.
For example, I had a support call with Azure asking them why the latency between Azure App Service and Azure SQL was as high as 13 ms, and they asked me if my target user base was "high frequency traders" or some such.
They just could not believe that I was expecting sub-1ms latencies as a normal thing for a database response.
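For what it's worth, measuring this takes a few lines. A minimal sketch (run_query is a placeholder for whatever trivial round trip your client library does, e.g. a SELECT 1; it's not the actual Azure setup):

```python
import statistics
import time

def measure_round_trips(run_query, n=100):
    """Time n trivial database round trips and report latency percentiles.
    run_query is a caller-supplied placeholder for the client call."""
    samples_ms = []
    for _ in range(n):
        start = time.perf_counter()
        run_query()
        samples_ms.append((time.perf_counter() - start) * 1000)
    samples_ms.sort()
    return {
        "p50_ms": statistics.median(samples_ms),
        "p95_ms": samples_ms[int(0.95 * n) - 1],
        "max_ms": samples_ms[-1],
    }

# On a same-rack LAN you'd hope for a p50 well under 1 ms; a consistent
# 13 ms suggests the traffic is crossing zones or some proxy layer.
```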