Comment by awesome_dude

14 days ago

> Question 2:

> Does throughput really matter more than latency in everyday application?

IME as a user, hell yes

When getting a video, I don't mind if it buffers for a moment, but once it starts I need all of that data moving to my player as quickly as possible

OTOH if there's no wait, but the data is restricted (the amount coming to my player is less than the player needs to fully render the images), the video is "unwatchable"

I don't mean to nitpick, but the absolute values of both matter much less than how they compare to "enough". As long as the throughput is enough to prevent the video from stuttering, it doesn't matter whether the data is moved to your video player at 1 GB/s or 1 TB/s. Conversely, you say you don't mind if a video buffers for a moment, but I'm willing to bet there's some value of "a moment" where it becomes "too long". Nobody is willing to wait an hour of buffering before their video starts.
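To put numbers on the "enough" point, here's a back-of-the-envelope sketch (the bitrate and buffer figures are assumed, illustrative values, not real measurements): playback is smooth once sustained throughput meets the stream's bitrate, and extra throughput beyond that only shortens the initial buffering wait.

```python
def startup_delay_s(buffer_mb: float, throughput_mbps: float) -> float:
    """Seconds spent filling the initial buffer before playback starts."""
    return buffer_mb * 8 / throughput_mbps

def is_smooth(bitrate_mbps: float, throughput_mbps: float) -> bool:
    """True when sustained throughput can keep up with the stream's bitrate."""
    return throughput_mbps >= bitrate_mbps

BITRATE = 20.0    # Mbps, ballpark for a 4K stream (assumed figure)
BUFFER_MB = 10.0  # initial playback buffer (assumed figure)

for throughput in (25.0, 1000.0):  # 25 Mbps link vs 1 Gbps link
    print(throughput,
          is_smooth(BITRATE, throughput),
          round(startup_delay_s(BUFFER_MB, throughput), 2))
```

Both links print `True`: once the 25 Mbps link clears the 20 Mbps bitrate, the 40x faster link only changes the startup delay (3.2 s vs 0.08 s), not whether the video stutters.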

The perception of speed in using a computer is almost entirely latency driven these days. Compare using `rg` or `git` vs loading up your banking website.

Hell no.

Linux desktop (and the kernel) felt awful for such a long time because everyone was optimizing for server and workstation workloads. It's the reason CachyOS (and before that linux-zen and.. Liquorix?) are a thing.

For good UX, you heavily prioritize latency over throughput. No one cares if copying a file stalls for a moment or takes 2 seconds longer if that ensures no hitches in alt-tabbing, scrolling, or mouse movement.

  • When Con Kolivas introduced a scheduler optimized for desktop latency, about 15 years ago, the amount of abuse he got from Linux developers was astonishing, and he ended up quitting for good. I remember compiling it on my laptop and noticing what a huge improvement it made in the usability of X and the desktop environment.

  • How many talks have you seen at USENIX that care about UNIX as desktop OS?

    Exactly.

What's every day?

Exactly, lots of different things.

When I alt-tab I care about latency.

When I ssh I care about latency.

When I download a 25GB game I care about throughput, but only up to a point, and that point is probably mainly ISP-bound rather than local-system-bound. I don't care if the download takes 10 or 11 minutes as long as I can still use my system with zero delays meanwhile. And whether it takes 11 minutes or 3 hours depends mostly on my ISP. But the system staying responsive to me while it downloads is local latency bound.
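The "ISP-bound" claim is easy to sanity-check with simple arithmetic (link speeds here are assumed, illustrative figures):

```python
def download_minutes(size_gb: float, link_mbps: float) -> float:
    """Minutes to move size_gb gigabytes over a link_mbps connection,
    using 1 GB = 8000 Mb and assuming the link is the only bottleneck."""
    return size_gb * 8 * 1000 / link_mbps / 60

# 25 GB game over a ~300 Mbps connection vs a ~15 Mbps one
print(round(download_minutes(25, 300)))       # about 11 minutes
print(round(download_minutes(25, 15) / 60, 1))  # several hours
```

Either way, no amount of local throughput tuning changes those numbers; what the local system controls is whether the desktop stays responsive during the transfer.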

The YouTube example you gave makes sense, sure.

This isn't what prioritizing throughput actually looks like in most scenarios.

In the example you gave, the read speed the user needs to keep up with a video is meager, and greater read speed is meaningless beyond maintaining a small buffer.

You in fact notice it more if your process is sometimes starved of CPU, I/O, or memory (e.g. waiting on swap). Conversely, in most cases you would barely notice if the entire thing got slower, even much slower, as long as its meager resources were quickly available to the thing you are doing right now.