Comment by Salgat

14 hours ago

What we desperately need before we get too deep into this is stronger support in languages for heterogeneous cores in an architecture agnostic way. Some way to annotate that certain threads should run on certain types of cores (and close together in memory hierarchy) without getting too deep into implementation details.

OpenMP, Intel's TBB and other libraries/tools are clearly moving in this direction.

The main issue is that Intel is... well, Intel. Even if they write a good library, there's probably a 0% chance it'd work well on their competitors' ARM systems. (And only a small chance that it'd be optimized for AMD.)
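
For what it's worth, one shipping API in roughly this spirit is Apple's QoS classes: the thread annotates its intent, and the kernel decides whether it lands on a P-core or an E-core. A minimal sketch (macOS only; the actual work is elided):

    #include <pthread.h>
    #include <pthread/qos.h>

    static void* background_work(void*) {
        // An annotation, not an affinity mask: on Apple Silicon the
        // scheduler will prefer efficiency cores for this thread,
        // but it is still free to migrate it.
        pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
        /* ... throughput-insensitive work ... */
        return nullptr;
    }

    int main() {
        pthread_t t;
        pthread_create(&t, nullptr, background_work, nullptr);
        pthread_join(t, nullptr);
    }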

------

Microsoft did put a lot of work into ConcRT, but it doesn't look very successful. It's a very clean model of task-based scheduling, but I'm not seeing much buzz about it or many blog posts marketing its benefits.
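
For the unfamiliar, here's the flavor of that model through PPL, the layer that sits on top of ConcRT's scheduler (MSVC only); a minimal sketch:

    #include <ppl.h>
    #include <iostream>

    int main() {
        concurrency::task_group tg;
        tg.run([] { std::cout << "task A\n"; });  // scheduled by ConcRT
        tg.run([] { std::cout << "task B\n"; });  // may run concurrently
        tg.wait();                                // join both tasks
    }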

I don't think so. I don't trust software authors to make the right choice, and even the most lopsided cases, where a thread will almost always need a bigger core, can afford to wait for the scheduler to figure it out.

And if you want threads to be close together in the memory hierarchy, does that mean close to RAM they can reach cheaply? And you want allocations served from there? If you really want that, you can use numa(3).
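
That interface is small; a sketch of what numa(3) gives you on Linux (link with -lnuma):

    #include <numa.h>
    #include <cstdio>

    int main() {
        if (numa_available() < 0) return 1;  // kernel built without NUMA
        size_t len = 1 << 20;
        // Ask for physical pages on node 0 specifically.
        void* buf = numa_alloc_onnode(len, 0);
        if (!buf) return 1;
        /* ... touch buf from threads running on node 0 ... */
        numa_free(buf, len);
        std::printf("allocated %zu bytes on node 0\n", len);
    }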

> without getting too deep into implementation details.

Every microarchitecture is a special case about what you win by being close to things, and how it plays with contention and distances to other things. You either don't care and trust the infrastructure, or you want to micromanage it all, IMO.

  • I'm talking about close together in the cache. If a threadpool manager is hinted that 4 threads are going to share a lot of memory, it can place them on cores that share the same L2 cache (see the sketch below). And you're trusting software developers either way, whether it be at the app level, the language/runtime level, or the operating system level.
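
    On Linux you can already do this by hand: sysfs tells you which CPUs share a given cache, and you pin the pool's threads to that set. A sketch, assuming index2 is the L2 on this machine (check its "level" file first):

        #include <pthread.h>
        #include <sched.h>
        #include <fstream>
        #include <string>
        #include <thread>
        #include <vector>

        // Parse a sysfs cpulist like "0-3,8" into a cpu_set_t.
        static cpu_set_t cpus_sharing_l2_with_cpu0() {
            cpu_set_t set;
            CPU_ZERO(&set);
            std::ifstream f(
                "/sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_list");
            std::string tok;
            while (std::getline(f, tok, ',')) {
                auto dash = tok.find('-');
                int lo = std::stoi(tok.substr(0, dash));
                int hi = (dash == std::string::npos)
                             ? lo : std::stoi(tok.substr(dash + 1));
                for (int cpu = lo; cpu <= hi; ++cpu) CPU_SET(cpu, &set);
            }
            return set;
        }

        int main() {
            cpu_set_t set = cpus_sharing_l2_with_cpu0();
            std::vector<std::thread> pool;
            for (int i = 0; i < 4; ++i)
                pool.emplace_back([&set] {
                    // All four workers now stay on cores behind one L2.
                    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
                    /* ... work on the shared data ... */
                });
            for (auto& t : pool) t.join();
        }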

    • NUMA-aware threading is somewhat rare, but it does exist.

      It's just reaching into the higher arts of high-performance programming that fewer and fewer programmers know about. I'm not an HPC expert myself; I just like to study this stuff on the side as a hobby.

      So NUMA-awareness is when your code knows that &variable1 is located in one physical location, while &variable2 is somewhere else.

      This is possible because NUMA-aware allocators (numa_alloc_onnode from libnuma on Linux, VirtualAllocExNuma on Windows) take parameters that place an allocation within a particular NUMA node.

      Now that you know certain variables are tied to physical locations, you can also tie threads to those same NUMA locations with affinity. And with a bit of effort, you can ensure that the threads in one work pool share the same NUMA zone (a sketch follows).
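
      Concretely, with libnuma you can pair the two steps, run the thread on node N and allocate on node N; a minimal sketch (link with -lnuma):

          #include <numa.h>
          #include <thread>
          #include <cstdio>

          static void worker(int node) {
              // Restrict this thread to the CPUs of `node`...
              numa_run_on_node(node);
              size_t len = 1 << 20;
              // ...and put its working set on the same node's memory.
              double* data =
                  static_cast<double*>(numa_alloc_onnode(len, node));
              if (!data) return;
              /* ... &data[i] is now physically local to this thread ... */
              numa_free(data, len);
          }

          int main() {
              if (numa_available() < 0) return 1;
              int last = numa_max_node();  // highest node id, e.g. 1 on 2 sockets
              std::thread a(worker, 0), b(worker, last);
              a.join(); b.join();
              std::printf("ran workers on nodes 0 and %d\n", last);
          }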

      ---------

      Now, code awareness of shared caches is less common. But following the same model of abstracted work pools with thread affinity plus NUMA awareness of data, programmers have been able to keep groups of Zen 1 cores working together on the same L3 cache (a CCX).

      A shared L2 cache across a cluster of E-cores is new, but it's not a new concept in general. (I.e., the same mechanisms and abstractions we used for thread affinity on Zen cores sharing an L3 cache, or for NUMA awareness on multi-socket CPUs, would all still work for L2 caches.)

      I don't know if the libraries support that. But I bet Intel's library (TBB) and their programmers are working on keeping their abstractions clean and efficient.
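
      Recent oneTBB releases actually expose something like this: task_arena constraints that can pin an arena to a NUMA node and, on hybrid chips, to a core type (it needs the hwloc-backed TBBBind library at runtime). A sketch:

          #include <oneapi/tbb/info.h>
          #include <oneapi/tbb/task_arena.h>
          #include <oneapi/tbb/parallel_for.h>

          int main() {
              // NUMA nodes and core types TBB can see; core_types() is
              // sorted from least to most performant.
              auto nodes = oneapi::tbb::info::numa_nodes();
              auto cores = oneapi::tbb::info::core_types();

              // Arena constrained to the first NUMA node and the
              // biggest core type (P-cores on a hybrid part).
              oneapi::tbb::task_arena arena(
                  oneapi::tbb::task_arena::constraints{}
                      .set_numa_id(nodes.front())
                      .set_core_type(cores.back()));

              arena.execute([] {
                  oneapi::tbb::parallel_for(0, 1000, [](int) {
                      /* ... work that wants big, local cores ... */
                  });
              });
          }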