It's worth noting that it's still possible to achieve low latency with modern hardware and even software to some extent, even with all the input filtering on touchscreens. The modern GPU rendering stack won't be able to do it, but I was able to do it in software without giving up the Linux kernel, IRQs, or the input stack.
My experiment got down to ~10 ms average tap latency on a 60 Hz phone with a touchscreen scanning at 240 Hz: https://twitter.com/kdrag0n/status/1291213993219039232
Source code: https://github.com/kdrag0n/touchpaint
I have to say that writing on phones really feels much more interactive and natural when the latency is so low. I don't personally feel like these ranges of latency are that big of a deal on desktop computers where input is indirect, but it feels much more significant on phones where your interactions are right underneath your finger.
The newest Apple Pencil is apparently around 9ms [1]. It really does fool your brain into thinking something is coming out of the tip.
[1] https://www.macworld.com/article/3402336/apple-pencil-change...
If only they cared about such things for Terminal.app, so we didn't have the extra step of installing Alacritty on every single machine to improve the situation (and not to anywhere near 9ms, even).
There was an iOS 9 glitch that disabled all animations, and it was so satisfying to open apps and have them show up immediately. Made the phone feel super fast
There’s still an option in accessibility to “Reduce Motion”. It does feel slightly quicker. But most animations are replaced by a less visually impactful fade.
This is an option in the developer settings on Android; it does feel very nice.
You don't need to jump through all the hoops you did. Many Android devices, high end ones anyway, support front buffer rendering via public APIs, such as through this extension: https://www.khronos.org/registry/EGL/extensions/KHR/EGL_KHR_...
Or if Vulkan is more your jam, doable with: https://www.khronos.org/registry/vulkan/specs/1.2-extensions...
And then "no-latency" input events can be done via View#requestUnbufferedDispatch ( https://developer.android.com/reference/android/view/View?hl... )
I'm sure there's then some way to achieve front-buffer rendering in a software rendered pipeline as well to cut out the GPU latency & avoid tile-based artifacts.
Interesting, I might give those rendering APIs a shot. Thanks for the tips.
I did try requestUnbufferedDispatch though, and it didn't make a noticeable visual difference, but I need to measure it to make sure.
How do your flutter and Android apps compare by the way?
They both have much higher tap latency due to the Android graphics stack. The difference in drag/scroll latency feels even higher, but I don't know how to measure that quantitatively without special equipment.
Example of ROG Phone II results:
Kernel module, display running at 60 Hz: 10 ms
Android, 120 Hz display: 32 ms
Flutter, 120 Hz display: 30 ms
They're only this close because Android gets 2x the refresh rate here, but it's still 3x slower.
Running list of latency tests on many different phones: https://docs.google.com/spreadsheets/d/1mahGpTKZLgKpaBDvcNR7...
Anyone that has used a 120Hz+ display will immediately notice that everything feels more responsive even with simple things like moving your mouse and scrolling.
I also find the responsiveness noticeable when typing. You'll be able to tell that the text appears faster on the screen.
I wouldn't say it's a requirement. But it does make the user experience nicer.
Now if only someone made a 34” ultra wide with 120Hz+. In particular one with 5120 x 2160 resolution. Maybe in 2022 at the rate high hertz monitors are coming for non-gaming uses...
I agree. I'd rather hi-dpi 32"+ monitors first though. Hidpi displays are becoming standard on top-end laptops but on desktop you're almost out of luck.
It's wild that the only hidpi 32" monitor is ~$5000, and it was released a few years ago now.
My eyeballs need that sweet crisp text.
Monitors like the LG 27GL850 are 1440p 144 Hz with great color repro. Not ultra-wide, but ultra-wide is not something I personally value
There are plenty of ultra-wide monitors with greater than 120Hz refresh. Just not with the resolution you are asking for.
Or at the size you are looking for.
Samsung 49-Inch CHG90 144Hz
It's a 49 inch, so not the same PPI. Different aspect ratio too (32:9, so 5120x1440).
This is pretty close - https://www.samsung.com/us/computing/monitors/gaming/49--ody...
I'm looking for something similar, but not ultrawide. There's plenty of 27" 4k 100Hz+ monitors, but not the same in 30-36" sizes. I have 3x 24" monitors, and would like to replace one or two of them with one bigger screen that's higher resolution and faster.
My current dream monitor is a 42-inch 3840x2400 or 7680x4800, 120 Hz screen. I love the 16:10 aspect ratio and real estate of a monitor like that. I’m currently running three 24-inch monitors in portrait mode, so 3600x1920
I'm waiting for the same thing. Hoping LG actually has this in the works, as their current 34" 5120 x 2160 has been out of stock for a while now, and they make a bunch of other high refresh rate displays.
I am also going to be buying one of these as soon as they hit the market.
The LG 38GN950-B [0] gets close. 3840x1600 so not quite your wanted resolution but close - enough to fit most 4K movies if you remove the black bars. Personally I am hoping someone will make an OLED with this format.
[0] https://www.lg.com/us/monitors/lg-38gn950-b-gaming-monitor
Xiaomi Surface 34
I doubt you'll find any gaming monitors at that resolution for a while. The newest Nvidia GPUs can't hit high enough frame rates at 4k to utilize 120Hz.
Why do you need 120Hz+ for productivity?
I bought a 165Hz gsync screen years ago for gaming.
I bought all of my other high-refresh screens because wow computing feels so much better in the day-to-day desktop because of it! Not joking in the least.
My brother swears by his 140hz display, I have used it for a bit and came away feeling underwhelmed - I prefer my 4K 10bit 60hz display
But the text isn't making it from your keyboard through the machine and to the monitor any faster. Seems like at 120hz you're just maybe going to catch it a frame earlier when it shows up.
It is, because there may be multiple things synchronizing their inputs and outputs to the refresh, which causes the refresh-related latency to be a number of frames. E.g. in some Linux compositors inputs are latched with the refresh, apps themselves may render in sync, and the compositor also draws in sync; the GPU driver would also introduce at least a frame of latency typically.
Another factor is that some (many?) 60 Hz displays buffer a whole frame themselves, and often don't have quick response times. If you go from a 10 ms response time IPS screen with a frame buffer to a 120 Hz gaming screen with 2-3 ms response time, you already got a difference of about 25 ms just in the screen itself.
8 ms is hard to notice. 50 ms less so.
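A back-of-the-envelope check of that "about 25 ms" figure, using only the numbers quoted above and treating the internal frame buffer as one full 60 Hz frame of delay (a rough sketch, not a measurement):

    # Screen-only latency: optional full-frame internal buffering + panel response time.
    def screen_latency_ms(refresh_hz, response_ms, buffers_full_frame):
        frame_ms = 1000.0 / refresh_hz
        return (frame_ms if buffers_full_frame else 0.0) + response_ms

    office_ips = screen_latency_ms(60, 10, buffers_full_frame=True)       # ~26.7 ms
    gaming_120hz = screen_latency_ms(120, 2.5, buffers_full_frame=False)  # ~2.5 ms
    print(round(office_ips - gaming_120hz, 1))  # ~24 ms, roughly the "about 25 ms" above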
The difference is pretty huge, even on systems that are much better tuned than Linux desktops (e.g. Windows 10).
That being said, while it is very nice and feels nice, it's not necessary for development work; I spend most of my days developing on a system over a VNC connection through a VPN, so the basic input lag of that setup is around 200-300 ms. Gnarly, yes, but not particularly bad for text input. You get used to just doing everything very slowly with the mouse.
In the article:
> We get a 90 ms improvement from going from 24 Hz to 165 Hz.
As per the linked article, the observed delay improvement isn't one frame, more like ~3 frames. It doesn't go from 1/24 s to 1/165 s; it goes from about 2.5/24 s to 2.5/165 s.
Computer software waits a lot more than it did in 1977. That's why 240 Hz displays feel much snappier even though the difference is supposed to be less noticeable -- you're waiting for the same ~3 frames, but they pass by much faster.
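To put numbers on that ~2.5-frame model (a quick Python sketch; the 2.5-frame figure comes from the comment above, not from new measurements):

    # Refresh-related latency modeled as ~2.5 frame times end to end.
    def refresh_latency_ms(hz, frames=2.5):
        return frames * 1000.0 / hz

    print(refresh_latency_ms(24) - refresh_latency_ms(165))  # ~89 ms, matching the quoted ~90 ms
    print(refresh_latency_ms(60) - refresh_latency_ms(240))  # ~31 ms for a 60 Hz -> 240 Hz jump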
I guess so, the difference between 8ms vs 16ms is not much for 1 frame.
Though in my totally subjective experience it feels better.
Interestingly the person who did this latency test also did a keyboard latency test:
https://danluu.com/keyboard-latency/
Compared to the slowest keyboard measured it's possible to shave 45ms which if you were latency sensitive would be the biggest reduction.
yes, shaving ~8ms
And on the other side: I very much notice the difference between a corded mouse and a Bluetooth mouse (not a cheap one). It's not unusable, but frustrating enough that I prefer the corded ones. And I'm not a gamer, just doing office stuff, browsing and programming.
I have tried something like 5 Bluetooth mice; since Bluetooth 4.0 the latency has improved from 'terrible' to 'tolerable'. It might be something to do with the Windows Bluetooth stack, as I noticed the response was more sluggish during high CPU use, while USB mice seem unaffected.
The same Logitech mouse performs much faster through their universal wireless adapter than through Bluetooth (it has 2 modes). They also have a Lightspeed adapter, but I haven't noticed much difference.
If you haven't already, get a high refresh rate monitor and a 1000 reports/sec "gaming" mouse. The Razer Viper Mini is light, which adds to the feeling of responsiveness. I've bought a couple for other people and they love them.
I recently got a 165 Hz monitor for my decade-old PC (Sandy Bridge era) and with my (now old) G302 mouse, it's like having a new, much faster PC.
Wireless devices are still hit-or-miss for me. I've also found that my wireless keyboard and mouse actually perform worse if I have WiFi enabled. -_-
Maybe there is something there about IDE hints and whatnot rendering faster too, but those are usually bottlenecked by some async background work (language servers, linters, etc.) anyway, which would negate the benefit of a faster screen refresh rate.
This can also be enhanced by tweaking your mouse sample rate and whatnot. I used to do a lot of setting PS/2 mouse sample rates to 80 Hz or more in the late 90s, which made Windows machines feel almost as nice to use as a Mac.
Isn't PS/2 interrupt based, therefore instantaneous? I think USB has 125 Hz polling rate by default which can be increased to 1000 Hz.
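The polling part is easy to quantify: on average an event sits for half a polling interval before the host reads it (a simplified model that ignores the rest of the input stack):

    # Average wait added by USB polling alone; the event lands at a random
    # point inside the polling interval, so the mean wait is half of it.
    def avg_polling_delay_ms(poll_hz):
        return 1000.0 / poll_hz / 2

    print(avg_polling_delay_ms(125))   # 4.0 ms at the default 125 Hz
    print(avg_polling_delay_ms(1000))  # 0.5 ms at a 1000 Hz "gaming" rate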
Yeah, this is my favorite gimmick of my Samsung Galaxy S20. The 120 Hz display + 240 Hz touch sensor makes everything feel so fast and responsive; this is the first Android device that feels faster than my iPad.
Does a 240hz sensor help when the display responds at half that rate?
I guess it's because the latencies stack on top of each other?
out of curiosity, which iPad?
Some recent PC/laptop displays can do 100hz and I even feel a difference between that and the long-standard 60hz. It's a small effect but it's noticeable and it looks a bit smoother and better.
Scrolling, yes. Mouse, absolutely. Typing? Not really. I have tested this side-by-side.
You may not notice it, but that doesn't mean nobody does. It can also depend on your operating system: Windows 10 (maybe everything now?) forces you to be in vsync, so increasing the refresh rate has a non-trivial effect on the number of ms for a keypress response.
Now whether you will notice that can depend on what environment you are in. If your editor already has an input latency of 100 ms, then shaving 8 ms off probably is not noticeable. But going from 20 to 12 might be.
The rate of persistence of the human eye is around 25 Hz, hence PAL and NTSC framerates. The upper bound for those hypersensitive is around 50 Hz, hence the later generation monitors. Anything above this is nothing more than marketing hype. It seeming smoother is merely a placebo to justify the extra cost.
Countless blind tests have shown a noticeable difference up to refresh rates over 100hz (and potentially greater). This is the first of many examples that I found: https://techreport.com/news/25051/blind-test-suggests-gamers...
"The human eye can't tell the difference past 30 FPS" was literally just a thought-killing cliche repeated by console gamers getting into internet slapfights with PC gamers.
You can see the difference for yourself here on any 60hz monitor: http://www.30vs60fps.com/
This is not correct. Let's not dispute and go with your claim of 25Hz.
Even then, the problem is that eyes don't work like cameras or monitors. Our eyes don't work with "frames". Each receptor updates on its own time. It's easy to see that, if the updates are staggered, multiple sensors could perceive higher frame-rates, even if they can't individually.
However, there's another angle to this. Disregarding input lag, which is a very real phenomenon and is greatly shortened by higher refresh, higher refresh rate monitors are able to show more 'discrete' steps for anything that's in motion. Our eyes (and brains) perceive this as movement 'smoothness', even if they can't quite make out every single frame that's displayed.
You should try that yourself. Do a blind test.
This is very incorrect. With video, every person can tell the difference between 30hz and 60hz, if they at least know what to look for.
The easiest way to show this is by wiggling your mouse around quickly on a computer screen.
At 30hz - you'll see the mouse teleporting around - hopping from spot to spot, but not moving. For example, if you stare in the middle, and jerk the mouse quickly to the right, you'll see the 4 or 5 spots where it rendered.
With 60hz, you'll see the 9 or 10 spots - and have a stronger illusion of movement.
With 120hz, it might even look as smooth as a real object flying across your screen.
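Those spot counts fall straight out of the frame time. Assuming the flick takes roughly 0.15 s to cross the screen (the duration is an assumption, chosen to match the counts above):

    # Distinct cursor positions rendered during a fast flick of the mouse.
    FLICK_SECONDS = 0.15  # assumed duration of the flick

    for hz in (30, 60, 120, 240):
        positions = hz * FLICK_SECONDS
        print(hz, "Hz ->", positions, "rendered positions")  # ~4-5, 9, 18, 36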
Sorry you're wrong.
It's as absurd as saying it's impossible to tell the difference between a 55" 720p display compared to a 4k one at a distance of 1 foot away.
Are you claiming humans would fail an ABX test between a 60Hz and a 120Hz monitor? That sounds like a pretty extraordinary claim.
So does it seem smoother or does it not? I definitely notice the difference with 60fps video and 120Hz+ monitors.
Additionally, another thing I've noticed in the last decade is the very badly PWM-frequency-tuned LED headlights on some cars. Those engineers selected the wrong LED brightness and tuned it to terrible frequencies which leave flicker-trails when you look at or away from them.
Get those PWM frequencies above 500Hz please! Especially get above 200Hz with your LED frequencies at the very least.
~25Hz is the _lower bound_ for video* to appear smooth, _as long as there is motion blur_. But user interfaces do not add motion blur to moving objects. A computer monitor at 25Hz would be horrendous to use.
Lower framerates are less noticeable in low light, which is another reason why films look acceptable.
*Talented animators/cartoonists can get away with lower framerates
I love the new Windows Terminal (wt.exe) but this is one area where they really messed up. The latency between a keypress and a character appearing on screen makes the whole thing feel like a cheap experience. I have no idea what's causing the latency.
I suspect there's some very big fancy rendering pipeline occurring, because when I open Windows Terminal I get the nVidia overlay popup which normally only comes up when I launch games, indicating the terminal is using a GPU-based rendering engine. Which I'm sure confers some interesting benefits, but it's a heavy price to pay when, at the end of the day, it's just a terminal.
I believe they did make a post about why conhost (the actual backend when you run cmd.exe from Start) has such low input latency compared to other Windows terminals.
conhost needs to do a lot of things itself (font rendering, layout, and so on) without the help of other services (it has to work even without them; see recovery mode). So it effectively bypasses a lot of the interactions with other services that a normal program needs in order to put text on screen. It is basically the tty of Windows.
But that also means it has to sacrifice a lot of things, because they aren't available in recovery mode. You get no fancy emoji support or whatever other features you would expect in a modern terminal. And the text rendering looked very bad until they recently reworked conhost.
Aren't games normally known for extremely low latencies?
Modern game engines buffer 3 or 4 frames, sometimes 5. Not unusual to have 140ms latency on 60hz screen between clicking mouse1 and seeing the muzzle flash.
https://www.youtube.com/watch?v=8uYMPszn4Z8 -- check at 6:30 the latency of 60fps vsync on 60hz. It's not even close to 16ms (1/60), it's ~118ms (7.1/60).
* deferred vs forward rendering (deferred adds latency)
* multithreaded vs singlethreaded
* vsync (double buffering)
It's a far cry from the simplified pure math people think of when they think of fps in games or refresh rate for office and typing. Software is very, very lazy lately, and most of the time these issues are being fixed by throwing more hardware at it, not by fixing the code.
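A rough sketch of why that number is so much larger than the naive 16 ms intuition (the frame counts here are the ones quoted above; real pipelines vary by engine, driver, and display):

    # Click-to-photon latency modeled as a number of frame times at a fixed refresh rate.
    def pipeline_latency_ms(refresh_hz, frames_in_flight):
        return frames_in_flight * 1000.0 / refresh_hz

    print(pipeline_latency_ms(60, 1.0))  # ~17 ms: the "one 60 Hz frame" intuition
    print(pipeline_latency_ms(60, 7.1))  # ~118 ms: the vsync'd 60 fps figure measured in the video
    print(pipeline_latency_ms(60, 8.4))  # ~140 ms: the click-to-muzzle-flash figure above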
Nope. Games usually opt for deeper pipelining to help keep framerates higher if they are making any choice at all. They usually just run at whatever rate they run at, and don't really do "latency tuning." Which is where products like AMD's Anti-Lag ( https://www.amd.com/en/technologies/radeon-software-anti-lag ) and Nvidia's Reflex ( https://www.nvidia.com/en-us/geforce/news/reflex-low-latency... ) enter the picture to just give games a library to help with latency instead.
Games that are aiming more for a "cinematic narrative experience" might be perfectly fine with a few 33ms frames of latency, and a total input latency far exceeding 100ms. Competitive twitchy games will tend to be more aggressive. And VR games too, of course.
In principle, you can push GPU pipelines to very low latencies. Continually uploading input and other state asynchronously and rendering from the most recent snapshot (with some interpolation or extrapolation as needed for smoothing out temporal jitter) can get you down to total application-induced latencies below 10ms. Even less with architectures that decouple shading and projection.
Doing this requires leaving the traditional 'CPU figures out what needs to be drawn and submits a bunch of draw calls' model, though. The GPU needs to have everything it needs to determine what to draw on its own. If using the usual graphics pipeline, that would mean all frustum/occlusion culling and draw command generation happens on the GPU, and the CPU simply submits indirect calls that tell the GPU "go draw whatever is in this other buffer that you put together".
This is something I'm working on at the moment, and the one downside is that other games that don't try to clamp down on latency now cause a subtle but continuous mild frustration.
Yeah, I'm far from an expert on rendering and latency, but presumably game developers put a ton of effort into ensuring that the pixels are pushed with as little input latency as possible. This may not have been a priority for Microsoft in their terminal.
For games, consistent, smooth frame rates and vsync (no tearing) are more important than input lag, so oftentimes things will be buffered.
That said, the VR space has a much tighter tolerance on input lag and there's hardware based mitigations. Oculus has a lot of techniques such as "Asynchronous Spacewarp" which will calculate intermediate frames based on head movement(an input) and movement vectors storing the velocity of each pixel. They also have APIs to mark layers as head locked or free motion etc.
I have yet to find a well behaved GPU accelerated VTE.
alacritty?
contour?
https://github.com/christianparpart/contour
Kitty?
Now do WFH with a remote desktop that is on a shared VM a few states away with a bunch of other devs. It's horrible.
With lockdown keeping me at home, and having to RDP to work over a VPN, I've gone quite far down the rabbit hole this year experimenting with low latency keyboards (1000Hz USB polling, configurable debounce times), gaming mice and high refresh rate monitors (125Hz+), all to try and make my remote experience feel more local.
The threshold for me (and most people I think) is about 110 ms between key strike and something appearing on the screen before it feels non-local and disembodied... which is good, because I found I could get down to ~40 ms local response time on a decent Windows box. My 2015 MacBook is about ~120 ms for the terminal app.
That leaves a 70 ms latency budget to play with, which can accommodate the RDP overhead (~50 ms) and the network hop (~10 ms for me).
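Spelling out that budget with the numbers given above (a sketch; every figure here comes from the comment, not from new measurements):

    # Remote-work latency budget.
    FEELS_LOCAL_THRESHOLD_MS = 110  # keypress-to-pixels before it feels disembodied
    LOCAL_RESPONSE_MS = 40          # tuned local keyboard + PC + monitor
    RDP_OVERHEAD_MS = 50
    NETWORK_HOP_MS = 10

    total = LOCAL_RESPONSE_MS + RDP_OVERHEAD_MS + NETWORK_HOP_MS
    print(total, "ms total;", FEELS_LOCAL_THRESHOLD_MS - total, "ms of headroom")  # 100 ms total; 10 ms spare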
This whole journey started with me wondering if I should pay a small fortune for a better network connection, before I realised that most of the latency was in my keyboard and monitor!
I even built an Arduino Leonardo gadget to measure the lag systematically, and some pcap software to spot the RDP packets on the wire. The code's pretty shoddy but it does the trick! You can see it and an example result with RDP here:
https://github.com/willmuldrew/lagmeter
Another big factor in the experience is jitter - so it's a good idea to ensure everything is plugged in if you can. Mice, keyboards and ethernet.
In essence, even if you have to RDP/remote into work, potentially you can still make it feel pretty local - don't give up hope!
Nice, I hadn't thought of that, but it makes sense that the network hop isn't the slow part now. I am surprised RDP to work isn't as slow as I feared; the shared server they give us is a different problem. :)
Great! I never expected low-latency devices to work for remote latency mitigation.
I have a project right now where I have to connect to a VPN, then remote-desktop to a "jumper PC" that's dual-homed via VNC to a segregated PC running Windows XP on a manufacturing tool we recently got out of mothball to backfill Covid supply chain gaps. On the plus side - the ancient version of Visual Studio on that tool is surprisingly snappy!
All my coworkers remote desktop into their workstations at the office from a laptop at home to do dev work. I can't do it. vscode remote development extension has been my savior when I need to code locally and then flip over to RDP to run the deployment from a specific environment (the office)
The workflow requires RDP? That sucks. Should build some command line equivalent.
Plug for mosh, the mobile shell. If you can ssh, you can probably mosh. https://mosh.org/
Our development environments have been azure VMs for ages.
Latency is not really an issue to be honest. I think ping is about 15ms which is much less than the lag in this article. Right now I'm using the horrible, 2012-era work laptop with just Chrome running and there are delays typing this comment into HN which is a super lean site as it is.
The main downsides are actually
* crap single-threaded performance (don't think Azure has VMs over 3ghz except maybe the highly expensive GPU ones)
* Slow IO
* Video and audio over RDP really doesn't work... no hardware graphics accel either... reduced ability to work on multimedia; we had one webapp where WebGL wound up behaving differently, which we didn't
Citrix has made a whole thing out of speeding up this scenario. It's a big part of why customers still use their software despite their other numerous failings.
Discussed at the time, 162 comments: https://news.ycombinator.com/item?id=16001407
This reminds me of graphing calculators in the late 90s when I was at school. I initially had an HP, but it took what felt like 500 ms to respond when I pressed a key. I switched to a TI-83 and it was chalk and cheese - instant response; you could even write programs in assembly. There was a small cottage industry of traded programs for it.
HP's early graphing calculators definitely had very high latency. But they dealt with it properly, which PCs don't. An HP-48 series calculator will buffer dozens of keypresses, and handle them in order as if the UI had been instantly responsive. So you don't actually have to wait for a menu or dialog box to be drawn to the screen—if you've learned the necessary input sequence to perform the task you want, you can enter it at full speed. The keyboards were also very high quality with great tactile feedback, so you didn't need visual feedback on the screen to know your key press had been registered.
I'd pile on that the programmability of the HPs was amazing compared to the TIs. The manual for my HP-48sx was a chonkyboi but man could I do so much with it. Once I figured out how to calculate pitch frequencies for equal temperament (just a mathematical series) I was using the BEEP function (IIRC) to have my calculator play music.
I own both hp50g and some TI calcs (86/89).
I'd describe hp and ti as laggy and slick, respectively. The 50g is what I used in university, but now I seldom touch it as the 86/89 are so much more pleasant to use.
PCs used to do this, before GUIs got all asynchronous on us. Try it with the old-old-old-old-style start menu (the one from NT 4 and 5).
It's still there!
https://ticalc.org/
Meta Kernel solved this issue, but yeah, that was a real problem.
If you're typing into a Slack window, I think you can multiply all that by 10.
Modern software is such garbage. We could make slack feel as good as counterstrike WRT input latency if we really cared hard enough about the end user experience. Unfortunately, chasing shiny (laggy) technologies is way more popular and marketable.
Electron is the new Flash. Bloat and lag. Using it is understandable as a tactical choice when it's expensive to maintain separate native apps everywhere but man, Slack is a 25 billion dollar company - surely they can write some native apps?
Same deal or worse with Teams, the performance of which is goddamn execrable. My laptop is an awful 2012 thing but it shouldn't take 5 seconds to switch between chats.
Check out ripcord: https://cancel.fm/ripcord/
I haven't used it with slack but it works just fine with discord. You don't get voice noise cancellation with ripcord but you don't need that if you use rtxvoice anyway.
Yes! Slack, Teams, Discord, Spotify. All of these are horrible. I hate to use them. Good thing is I only have to use one of them. I wish there was more modern low latency software.
Happens all the time:
https://mobile.twitter.com/programmer_just/status/1253098759...
I don't even bother editing my dumb typos because the latency of just hitting up + e has to be over 750 ms. Complete garbage.
Please please please point me to instructions on how to build a low latency PC.
I crave some good old fashioned responsive computing, but moreover I want to fill my classroom with these devices and show a new generation that, to borrow a catchphrase from a far more noble cause, it gets better.
Regarding monitors, OLEDs have a much faster response compared to LCDs. See this [1] for an example. As for why keyboard scanning has become so abysmally slow, that's beyond me. You'd think this would be a solved problem even for cheap wired keyboards.
[1] https://youtu.be/x9n8Hz_RLqw
I would love for CRT monitors to make a comeback.
Sure, they are less portable and difficult to manufacture/repair. But their combination of refresh rate (without blur), image quality, and color reproduction are still second to none.
Considering the price of an nVidia 3080, which is standard PC gaming equipment, there must be a market for high end screens for gamers.
Until relatively recently I'd have agreed, but at this point I'd take a modern display over a CRT, assuming we're talking about the high end. Popular options in the high end are displays like the LG 48 CX or some of the "G-Sync Ultimate" monitors like the Predator X27. These kinds of displays start at prices higher than a 3080, though, so you don't hear about the market as much, since people usually expect to spend less on a monitor than a GPU and the 3080 is generally considered the top of most high-end price ranges.
> But their combination of refresh rate (without blur), image quality, and color reproduction are still second to none.
Not true. A current generation OLED (which will do 120hz) is superior to a CRT.
Even still, on many of those a good LCD has long been better than a good CRT. Color accuracy, for example. CRTs required constant fiddling & calibration to stay good at that, whereas a quality factory calibrated LCD will be much better in an "out of the box" or typical usage. Not really sure what you're calling "image quality" but it's hard to imagine that going any way but solidly in a modern high-end LCDs camp, which are brighter, wider color gamuts, higher resolution, and higher pixel densities than CRTs ever had.
> there must be a market for high end screens for gamers.
There is, which is why we have high resolution, high refresh rate, and adaptive vsync IPS monitors aplenty now. They come in all sorts of shapes & sizes, in resolutions & dimensions CRTs could only dream about.
And it could be streamlined. No need for a bazillion types.
Color and raster flexibility are the two bright spots for me.
Glowing phosphors in tubes is just a great tech.
I too like the feeling of xrays shooting into my eyes point blank for 12 hrs every day.
Electrons
At least I know I'm not wearing nostalgia goggles when I remember my early days on my Windows 95, and how fast the input responses were. It definitely feels as if input latency has degraded.
It definitely has - PS/2 is interrupt driven, while USB requires polling. This alone introduces enough lag to be felt during normal use.
The difference is big enough that people have hacked the drivers to let you overclock mouses: https://github.com/LordOfMice/hidusbf
I recently upgraded from a GTX 550 Ti (~10 years old) to an RX 570 (3 years old), and even though I get way more performance, it introduced noticeable input lag in CS:GO.
It just doesn't feel right, and slow-motion video capture confirms it.
Have you tried turning off vsync? I'm assuming you don't have a FreeSync monitor, given the jump from Nvidia to AMD.
CS:GO is CPU-bound; I'd suspect it has something to do with the CPU rather than the GPU.
No other components in my setup have changed.
I also experience the lag on my Windows desktop while dragging windows.
Did vsync get turned on somehow?
Additionally, the input lag may have been there all along but was not perceivable due to low performance.
I had a similar problem; here are a few things that I noticed affecting input lag:
- BIOS HPET setting
- Running the monitor at a non-native resolution (the monitor has to rescale the image, which adds lag)
- Control Flow Guard and Data Execution Prevention settings
It would be interesting to measure input latency on the new M1 Macs and add them to the table. It certainly feels like they’ve managed to claw a bit back.
I use both Microsoft Office 97 and LibreOffice for writing. The former runs on virtualized Windows Me. The latter runs "native".
Keypress responsiveness when typing is night and day, and given where I'm posting this comment, you can guess who has the advantage.
Platform: GNU/Linux (Linux Mint, Xfce)
> The former runs on virtualized Windows Me
...which is, like the rest of the DOS-based Windows line from Windows/386 onwards, itself a sort of hypervisor architecture: DOS applications effectively run in VMs thanks to V86 mode, giving native performance. The difference is very noticeable if you open something like MS-DOS EDIT on Win9x vs. NT 4/2K/XP; on the latter it runs in an emulated environment and the latency is enormous.
I can't guess
The hint is in the configuration.
The Windows setup is convoluted, awkward, and weird-sounding compared to the native app. They are using it for a reason, though: it's much, much more responsive than native Linux LibreOffice.
Microsoft Office 97 most probably
I'm sorry, have you seen someone about that?
OpenOffice feels a bit faster than LibreOffice.
FWIW, here's an explanation from one of its maintainers of why the Windows Terminal is so responsive: https://github.com/microsoft/terminal/issues/327#issuecommen...
This has been posted here a few times, but it's still an interesting read in this context.
One thing I haven't seen is how the M1 fares in terms of input latency (e.g. in text editors, like in those old tests done with vim, Sublime, IntelliJ, etc.: https://pavelfatin.com/typing-with-pleasure/).
I have a hunch it eliminates some...
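Lacking a camera rig, one can at least compare the application/toolkit share of typing latency in software. A minimal sketch, assuming Python with Tkinter; it only measures from key-event delivery to the toolkit finishing its redraw work, so it misses OS input handling, compositor queuing, and display scan-out:

    # Crude per-keystroke timing of a toolkit's event handling + redraw.
    # This captures only the application-side slice of the pipeline.
    import time
    import tkinter as tk

    root = tk.Tk()
    text = tk.Text(root, width=60, height=10)
    text.pack()

    def on_key(event):
        start = time.perf_counter()

        def after_redraw():
            # update_idletasks flushes any redraw work queued by the keystroke.
            root.update_idletasks()
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{event.keysym}: {elapsed_ms:.2f} ms (app-side only)")

        # Runs once the event queue drains, i.e. after the widget handled the key.
        root.after_idle(after_redraw)

    text.bind("<Key>", on_key)
    root.mainloop()

Numbers from a sketch like this are only comparable between editors or toolkits on the same machine; camera-based measurements like the article's are still the only way to get true end-to-end figures.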
> Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen.
The BlackBerry Q10 runs BlackBerry 10, which is based on QNX, which has a realtime microkernel.
I'm surprised the Apple 2e's latency is so high, given the seemingly simplistic nature of the machine. I'm also surprised the TI-99/4A's isn't higher, given how complex it is, with its BASIC interpreter implemented on top of a virtual machine.
Probably just the nature of the sort of chips you could buy at the time and their scan rates and debounce circuitry. As time marches on we have added a lot of abstraction into the mix. One thing I used to do to make old DOS computers 'feel faster' was to turn up the keyboard rates in the BIOS.
TI's throughput certainly was slower. It's been a few decades, but I'm pretty sure it was possible to type faster than the interpreter could keep up. (As a point of reference, listing a program was unbearably slow on the TI, and unreadably fast on the //e.)
I suspect it's just that input processing is so cheap on those early architectures that latency is dominated by hardware constraints rather than by the CPU (however slow it might be). Certainly, on the //e, I'm pretty sure the input routine did little more than copy the character to the input buffer and the screen buffer and advance the cursor. The TI's probably wasn't much more complicated, despite being implemented in bytecode.
In hindsight, it validates the extremely expensive purchase my father made 35 years ago ;) I loved that computer!
Did you wiggle the mouse while measuring? ;)
Back in the day (early '80s) I remember using a VAX-11/780 at 1 MIPS via time-sharing with 60 other simultaneous users.
Editing with EDT felt quite snappy. Amazing.
I feel the latency when connecting over a VPN to a remote computer and using RDP to a computer on that network. It is not a pleasant experience.
A few years ago I had to work on a thin client for a client. After a while I didn't notice it; I think my brain just got slower, heh.
Latency to “the cloud” is 10-20ms for most people in the US. (Assuming they’re connecting directly to the closest regional data center.)
Your assumption is very optimistic for a number of people: a generation of security policy wanted everything routed through central monitoring points, so you get home network -> VPN -> corporate data center -> remote server and back, all in the hope that the monitoring system would catch malicious traffic (since attackers know how to use encryption and obfuscation, this is usually easily bypassed).
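If you want to sanity-check what your own path looks like (with and without the corporate VPN up), a TCP connect time is a serviceable stand-in for one round trip. A rough sketch; the host below is just an assumed, always-reachable example endpoint, not necessarily the data center you would actually talk to:

    # Approximate network RTT by timing a TCP handshake; connect() returns after
    # roughly one round trip, plus a little local stack overhead.
    import socket
    import time

    def connect_time_ms(host: str, port: int = 443, attempts: int = 5) -> float:
        best = float("inf")
        for _ in range(attempts):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                best = min(best, (time.perf_counter() - start) * 1000)
        return best

    # Example endpoint (assumed reachable); swap in your actual region/server.
    print(f"best of 5: {connect_time_ms('www.google.com'):.1f} ms")

Run it once directly and once over the VPN, and the difference is roughly the detour described above.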
Too bad the author (perhaps strategically) dodged the two microcomputer branches that were seminal for their era: the 16-bit Ataris and the Amigas.
I once read that properly written Amiga software achieved around 16ms latency and would like to see that verified or debunked.
I once got so frustrated with the typing latency of editing a Confluence page that I remarked to my coworkers that the "baud rate is unacceptable".
I remember writing code on a PDP-8 where we had to write our own code to display each character as it was typed.
In the appendix they mention something interesting: that Apple's CPU performance advantage isn't magic, but very careful planning and testing on the right things, namely the entire App Store's contents. That's a huge dataset of real-world applications you can test your designs against.
Huh. I don't remember the Palm Pilot being noticeably slow.
I love this. Thanks for sharing your measurements!
Wonderful article! Thanks!
I wish computer/OS makers would consider this more. Frankly, I don't see why this isn't in the region of one refresh period of the display.
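For comparison, here is a rough tally of where a conventional double-buffered 60 Hz desktop pipeline spends its time. The stage durations are assumptions for the sake of the arithmetic, not measurements of any particular OS:

    # Illustrative latency budget for a typical double-buffered desktop pipeline.
    # Every stage duration is an assumed, round number for illustration.
    refresh_ms = 1000 / 60

    stages = {
        "input polling (avg, 125 Hz USB)":      4.0,
        "OS input + app + compositor work":     3.0,
        "wait for next vblank (avg)":           refresh_ms / 2,
        "one queued frame (double buffering)":  refresh_ms,
        "display scan-out to mid-screen (avg)": refresh_ms / 2,
    }

    for name, ms in stages.items():
        print(f"{name:38s} {ms:5.1f} ms")
    print(f"{'total':38s} {sum(stages.values()):5.1f} ms  (one refresh period = {refresh_ms:.1f} ms)")

Under those assumptions the total lands around 40 ms; getting near a single refresh period means eliminating the queued frame and the vblank wait, which mainstream compositors don't attempt.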
This is interesting but perhaps shows that keyboard/screen latency isn't as important as it once was.
Just about every application on the Apple 2e relied on keypresses being shown on the screen. Modern devices and operating systems are compositing windows, streaming 3D graphics, etc., and running a myriad of other services at the same time.
Sure, the computer is doing a lot more. Apps are still responding to key presses and mouse clicks, though. Fortnite, AutoCAD, or Excel: when I press a button, I want to see the effect. Increasing delay strictly reduces your ability to control the system via feedback.
Even in a mostly passive app like a video player, it sucks if the play/pause button has a long-delayed effect. The tendency is to press the button over and over and to feel, correctly, that we have lost control of the app.
Maybe it's a self-fulfilling-prophecy kind of thing: do we have high-speed games today like we used to, e.g. [1]? If game developers assume shitty hardware and software, they won't make things that require better latency.
[1] https://www.youtube.com/watch?v=J1TDNliM99U
First of all, Quake III was released in 1999 - that's way after the low-latency days already.
Secondly, not every PC in 1999 was capable of even running QIII (let alone at a high frame rate).
During the days of true low-latency gaming, actual game frame rates were in the low 20s and even consoles rarely achieved 50 or 60 fps.