Comment by Salgat
10 hours ago
To clarify, what gets scheduled is up to the OS or runtime; all you're doing is setting relative priority. If everything is at the same priority, then it's just as likely to all run on E-cores.
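For example, on Linux a per-thread nice value is exactly this kind of relative hint. A minimal sketch (gettid() needs glibc 2.30 or newer; the nice value of 10 is an arbitrary choice):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

static void *background_worker(void *arg) {
    /* Nice value 10 says "this thread matters less" relative to others.
     * Nothing here pins it to a P-core or E-core; placement is still
     * entirely the kernel scheduler's call. */
    if (setpriority(PRIO_PROCESS, gettid(), 10) != 0)
        perror("setpriority");
    /* ... low-priority work ... */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, background_worker, NULL);
    pthread_join(t, NULL);
    return 0;
}
```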
And then, what's the point?
A system that encourages everyone to jack everything up is pointless.
A system for telling the OS that the developer anticipates data being shared and super hot will mostly be lied to (by accident or on purpose).
There are edge cases: database servers, HPC, etc., where you can reasonably believe the system has a sole occupant that can predict its load.
But libnuma and the underlying ACPI SRAT/SLIT/HMAT tables are a pretty good fit for these use cases.
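For instance, a sole-occupant process can keep a thread and its working set on one node with libnuma directly. A rough sketch (link with -lnuma; node 0 and the 64 MiB size are arbitrary example choices):

```c
#include <numa.h>   /* libnuma: link with -lnuma */
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported here\n");
        return 1;
    }
    /* Run the calling thread on node 0 and allocate its working set
     * there, so the hot data stays local to the cores touching it. */
    if (numa_run_on_node(0) != 0) {
        perror("numa_run_on_node");
        return 1;
    }
    size_t len = 64 * 1024 * 1024;
    char *buf = numa_alloc_onnode(len, 0);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }
    /* ... touch buf from this thread ... */
    numa_free(buf, len);
    return 0;
}
```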
If you lie about the nature of your application, you'll only hurt performance in this configuration. You're not telling the OS which cores to run on; you're simply giving hints about how the program behaves. It's no different from telling the threadpool manager how many threads to create or whether a thread is long-lived. It's a platform-agnostic hint to help performance. And remember, this is all optional, just like the threadpool example that already exists in most major languages. Are you going to argue that programs shouldn't have access to the CPU's core count either? By your reasoning, they'll just shoot themselves in the foot with that too.
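Core count is the same kind of optional hint source. A minimal sketch of a pool sizing itself (_SC_NPROCESSORS_ONLN is a common POSIX extension, not guaranteed everywhere):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Ask the OS how many processors are online; the program only uses
     * this as a sizing hint, exactly like a threadpool hint. */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 1)
        n = 1;  /* the query can fail or be unsupported; fall back safely */
    printf("sizing the worker pool to %ld threads\n", n);
    return 0;
}
```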
Again, there are already explicit ways for programs to exercise fine control; this stuff is already declared in ACPI and libnuma, and higher-level shims exist over it. But generally you want to know both how the entire machine is being used and pretty detailed information about working-set sizes before attempting this.
Most software that has tried to set affinities has ended up getting it wrong.
There's no need to put an easier user interface on the footgun, or to make the footgun cross-platform. These interfaces offer opportunities for small wins (generally <5%) and big losses. If you're in a supercomputing center or a hyperscaler running your own app, this is worth it; if you're writing a DBMS that will run on tens of thousands of dedicated machines, it may be worth it. But usually you don't understand how your software will be deployed well enough to know whether this is a win.
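For the record, the footgun itself is only a few lines. A Linux-specific sketch (pinning to CPU 0 is an arbitrary example; doing this without whole-machine knowledge is exactly how the big losses happen):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  /* hard-pin the calling thread to CPU 0 */

    /* pthread_setaffinity_np returns the error code directly. If another
     * layer (the runtime, a container, a batch scheduler) had its own
     * placement plan, this silently overrides it. */
    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc != 0)
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
    return 0;
}
```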