
Comment by nine_k

2 days ago

Funnily enough, Unix already has user-settable priorities, aka the "nice level". ACPI gives us an idea of how plentiful the power is.

So, when powered by AC power, schedule everything on P cores when possible, schedule processes that eat a lot of CPU on P cores, same for any process with a negative nice value.

When powered by a battery, schedule anything with a non-negative nice value on E cores, and keep one P core up for real-time tasks and for nice-below-zero tasks.

These are two extremes, but I suppose that the idea is understandable.
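
A minimal sketch of that policy on Linux, assuming the P/E core ID sets are known up front (the sets below are made up; check lscpu or /proc/cpuinfo on your machine), using the power-supply sysfs interface, os.getpriority, and os.sched_setaffinity:

```python
import os
from pathlib import Path

# Hypothetical topology: which logical CPUs are P cores vs E cores.
P_CORES = {0, 1, 2, 3, 4, 5, 6, 7}
E_CORES = {8, 9, 10, 11, 12, 13, 14, 15}

def on_ac_power() -> bool:
    """True if an AC adapter in /sys/class/power_supply reports itself online."""
    for supply in Path("/sys/class/power_supply").iterdir():
        try:
            if (supply / "type").read_text().strip() == "Mains":
                return (supply / "online").read_text().strip() == "1"
        except OSError:
            continue
    return True  # no adapter found: assume a desktop that is always on AC

def place(pid: int) -> None:
    """Pin one process to P or E cores from its nice value and the power source."""
    nice = os.getpriority(os.PRIO_PROCESS, pid)
    if on_ac_power():
        os.sched_setaffinity(pid, P_CORES)         # AC: P cores for everything
    elif nice >= 0:
        os.sched_setaffinity(pid, E_CORES)         # battery: ordinary work on E cores
    else:
        os.sched_setaffinity(pid, {min(P_CORES)})  # battery: nice < 0 gets the one P core kept up
```

A real tool would also re-place processes when the power source changes, but the mapping from nice value and power source to a core set is the whole idea.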

> So, when powered by AC power, schedule everything on P cores when possible, schedule processes that eat a lot of CPU on P cores, same for any process with a negative nice value.

Even when plugged in, you may have thermal limitations. P cores will chew through your power budget more aggressively than E cores. For latency-sensitive workloads you do want to emphasize the P cores, but when throughput is the goal you'll usually be better off not ignoring the E cores, and not trying to run the P cores at high frequency where they're much less efficient. Intel started adding E cores to consumer chips in large part so they could score better on throughput-oriented multithreaded benchmarks like Cinebench; they're decent at compiling code, too, but you'll still want the P core for the linker.

  • I always personally disable turbo boost, especially on laptops.

    • Far better would be to tweak the time constants to your liking, so that you can use the full clock range of the chip but constrain its sustained power draw, for quiet operation and long battery life (the sketch after this list shows the corresponding Linux sysfs knobs).

    • If I run a game, I limit the CPU to about 50% of its clock speed.

      It's the only way to stop the laptop getting crazy hot and the fans spinning up, which meaningfully reduces the heat the laptop dumps onto the desk...
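
For reference, on Linux all three of these knobs are plain sysfs files: turbo boost, the per-core maximum clock, and the RAPL package power limit with its averaging window (the "time constants" mentioned above). A rough sketch with illustrative values; the exact paths depend on the cpufreq driver, and writing them needs root:

```python
from pathlib import Path

CPU = Path("/sys/devices/system/cpu")
RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package power domain (powercap/intel-rapl)

# Disable turbo boost (intel_pstate exposes no_turbo; other drivers expose cpufreq/boost).
if (CPU / "intel_pstate/no_turbo").exists():
    (CPU / "intel_pstate/no_turbo").write_text("1")
elif (CPU / "cpufreq/boost").exists():
    (CPU / "cpufreq/boost").write_text("0")

# Cap every core at roughly half of its maximum clock, as in the 50% game example above.
for policy in CPU.glob("cpu[0-9]*/cpufreq"):
    max_khz = int((policy / "cpuinfo_max_freq").read_text())
    (policy / "scaling_max_freq").write_text(str(max_khz // 2))

# Or, per the suggestion above: leave the clock range alone and instead constrain the
# sustained package power and its averaging window -- 15 W over 28 s here, both
# numbers purely illustrative.
if RAPL.exists():
    (RAPL / "constraint_0_power_limit_uw").write_text("15000000")
    (RAPL / "constraint_0_time_window_us").write_text("28000000")
```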

That's not really how nice levels have traditionally worked, and it would rule out specifying "run on Performance cores, but yield to other processes quickly".

I think this is where things are lacking. Not enough information can be conveyed to the OS with just a number, and the number is fixed rather than tied to user input (active application, the user just clicked, an action is blocking presentation).

It'd be cool if tasks told you about their workload in terms of latency, throughput, and cadence required (hello, skipping audio when you compile hard).
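
Linux's SCHED_DEADLINE is one existing interface in roughly that spirit: a task declares how much CPU time it needs, by when, and how often. A hedged sketch using the raw sched_setattr syscall (struct layout per the sched_setattr(2) man page; the syscall number is x86-64 specific, and the call needs CAP_SYS_NICE):

```python
import ctypes
import os

SCHED_DEADLINE = 6
SYS_sched_setattr = 314  # x86-64 only

class SchedAttr(ctypes.Structure):
    """struct sched_attr from sched_setattr(2)."""
    _fields_ = [
        ("size", ctypes.c_uint32),
        ("sched_policy", ctypes.c_uint32),
        ("sched_flags", ctypes.c_uint64),
        ("sched_nice", ctypes.c_int32),
        ("sched_priority", ctypes.c_uint32),
        ("sched_runtime", ctypes.c_uint64),   # ns of CPU time needed...
        ("sched_deadline", ctypes.c_uint64),  # ...finished by this deadline...
        ("sched_period", ctypes.c_uint64),    # ...every period (the cadence).
    ]

def declare_workload(runtime_ms: float, deadline_ms: float, period_ms: float) -> None:
    """Tell the scheduler: this thread needs runtime_ms of CPU by deadline_ms, every period_ms."""
    attr = SchedAttr()
    attr.size = ctypes.sizeof(SchedAttr)
    attr.sched_policy = SCHED_DEADLINE
    attr.sched_runtime = int(runtime_ms * 1e6)
    attr.sched_deadline = int(deadline_ms * 1e6)
    attr.sched_period = int(period_ms * 1e6)
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.syscall(SYS_sched_setattr, 0, ctypes.byref(attr), 0) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

# e.g. an audio thread: 2 ms of work, due within 5 ms, every 10 ms buffer period.
# declare_workload(2, 5, 10)
```

That only covers the latency/throughput/cadence part, though; it still says nothing about user-facing context like "the user just clicked".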

> when powered by AC power, schedule everything on P cores when possible

Sometimes I feel like that is undesirable. It may make the system consume more power, and thus produce more heat and more noise.

I may be completely wrong, but I read that E cores are not so much power efficient as die-space efficient.

  • They're both - though Intel has mostly talked up the power efficiency.

    For CPUs, those two types of efficiency are closely related. Omitted transistors (in an E-core design) neither take up die space nor consume power. And CPU cooling systems are ultimately measured by how many watts of heat they can remove from each unit of die area - so fewer watts from a smaller core. (That's at a given temperature difference, etc. But your die will die if any part of it gets too hot. And revving up the CPU cooling fan is generally not preferred.)