Comment by dragontamer
14 hours ago
Triple decoder is one unique effect. The fact that Intel managed to get them lined up for small loops to do 9x effective instruction issue is basically miraculous IMO. Very well done.
Another unique effect is the L2 shared between 4 cores. This means that thread communication across those 4 cores has much lower latency.
I've had lots of debates with people online about this design vs Hyperthreading. It seems like the overall discovery from Intel is that highly threaded tasks use less resources (cache, ROPs, etc. etc).
Big cores (P-cores or AMD Zen5) can obviously split into 2 hyperthreads, but what if that division is still too big? E-cores give you 4 threads of support in roughly the same space as 1 P-core.
This is because L2 cache is shared/consolidated, and other resources (ROP buffers, register files, etc. etc.) are just all so much smaller on the Ecore.
It's an interesting design. I'd still think that growing the cores to 4-way SMT (like Xeon Phi) or 8-way SMT (POWER10) would be a more conventional way to split up resources though. But obviously I don't work at Intel and can't make these kinds of decisions.
> The fact that Intel managed to get them lined up for small loops to do 9x effective instruction issue is basically miraculous IMO
Not just small loops. It can reach 9x instruction decode on almost any control flow pattern. It just looks at the next 3 branch targets and starts decoding at each of them. As long as there is a branch every 32ish instructions (presumably a taken branch?), Skymont can keep all three uop queues full and Rename/dispatch can consume uops at a sustained rate of 8 uops per cycle.
And in typical code, blocks with more than 32 instructions between branches are somewhat rare.
But Skymont has a brilliant trick for dealing with long runs of branchless code too: it simply inserts dummy branches into the branch predictor, breaking the runs into shorter blocks that fit into the 32-entry uop queues. The 3 decoders will start decoding the long block at three different positions, leap-frogging over each other until the entire block is decoded and stuffed into the queues.
This design is absolutely brilliant. It seems to entirely solve the problem of decoding x86, with far fewer resources than a uop cache. I suspect the approach will scale to almost unlimited numbers of decoders running in parallel, shifting the bottlenecks to other parts of the design (branch prediction and everything post-decode).
Thanks for the explanation. I was wondering how the heck Intel managed to build 9-wide decode into an x86 core, and a low-power core of all things. Seems like an elegant approach.
The important bit: Intel E-cores now have 3 decoders, each capable of 3-wide decode. When they work as a team, they can perform 9 decodes per clock tick (which then bottlenecks to 8 renamed uops in the best-case scenario, and more typically ~3 or ~4 uops).
1 reply →
While the frontend of Intel Skymont, which includes instruction fetching and decoding, is very original and unlike that of any other CPU core, the backend of Skymont, which includes the execution units, is extremely similar to that of Arm Cortex-X4 (a.k.a. Neoverse V3 in its server variant and Neoverse V3AE in its automotive variant).
The similarity consists in the fact that Intel Skymont and Arm Cortex-X4 have the same number of execution units of each kind (and there are many kinds of execution units).
Therefore it can be expected that for any application whose performance is limited by the CPU core backend, Intel Skymont and Arm Cortex-X4 (or Neoverse V3) should perform very similarly.
Moreover, Intel Skymont and Arm Cortex-X4 have the same die area, i.e. around 1.7 square mm (for both cores, this area includes 1 MB of L2 cache). Therefore the 2 cores should not only have about the same performance for backend-limited applications, but also the same cost.
Before Skymont, all the older Intel Atom cores had been designed to compete with the medium-size Arm Cortex-A7xx cores, even if the Intel Atom cores always lagged the Cortex-A7xx in performance by a year or two. For instance, Intel Tremont had a performance very similar to Arm Cortex-A76, while Intel Gracemont and Crestmont have core backends extremely similar to the Cortex-A78 through Cortex-A725 series (like Gracemont and Crestmont, the 5 cores Cortex-A78, Cortex-A710, Cortex-A715, Cortex-A720 and Cortex-A725 have only insignificant differences in their execution units).
With Skymont, Intel has made a jump in E-core size, positioning it as a match for Cortex-X rather than for Cortex-A7xx, as its predecessors were.
>positioning it as a match for Cortex-X
Well, the recent Cortex-X925 (formerly the Cortex-X5) is already at around 3.4 mm², so that comparison isn't exactly accurate. But I would love to test and see results on Skymont compared to X4, though I don't think those are available yet (as an individual core).
I am really looking forward to Clearwater Forest which is Skymont on 18A for Server.
And I know I am going to sound crazy, but I wouldn't mind a small SoC based on Skymont and Xe2 graphics for smartphones to tablets.
Like I have said, Intel Skymont is a very close match for Cortex-X4, not for Cortex-X925.
With Cortex-X925, Arm has made a big jump in core size, departing from the previous Cortex-X series. This has allowed a good increase in IPC, greatly improving single-threaded benchmark results, but it has been paid for by much worse performance per area, making Cortex-X925 completely unsuitable for multi-threaded applications. Therefore Cortex-X925, like Intel Lion Cove, is useful only when accompanied by smaller cores that handle the multi-threaded workloads.
So unlike with previous Arm cores, Cortex-X925 has not made Cortex-X4 obsolete, as demonstrated e.g. in MediaTek Dimensity 9400, which includes 1 Cortex-X925 to get good single-threaded benchmark scores, together with 3 Cortex-X4 to get good multi-threaded benchmark scores.
It is not clear what Arm intends for the evolution of the Cortex-X series. The rumor is that the next core configuration for smartphones will look like what Qualcomm has already deployed with its custom cores, i.e. a big core that is 3 times bigger than the medium-size core, used as 2 big Cortex-X930 cores + 6 medium-size Cortex-A730 cores, for an even split in die area between the big cores and the medium-size cores.
For this to work well, Cortex-X930 must provide a good improvement in performance per area over Cortex-X925, because otherwise it would be hard to justify a 2+6 arrangement when the same die area could implement a 1+9 configuration with the same single-threaded performance but better multi-threaded performance.
I believe that a small SoC with only 4 Skymont cores and Xe2 graphics would provide performance, battery lifetime and cost for a smartphone that would be completely competitive with any existing Qualcomm, MediaTek or Samsung SoC.
This would be less obvious in a benchmark like GeekBench 6, where Cortex-X925 or Qualcomm Oryon L would show a greater single-threaded score, but the difference would not be great enough to actually matter in real usage. For multi-threaded performance as measured by GB6, only 4 Skymont cores would also seem a little slower than the current flagships, but that would be misleading: 4 Skymont cores could run at full speed for long durations within a smartphone's power constraints, while the current 8-core flagships can never run all 8 cores at the 100% performance recorded by GB6 without overheating after a short time.
An 8-core Skymont SoC would be excellent for a cheap tablet with long battery lifetime and great performance. Again, such a configuration would be penalized by GB6, which favors having 1 huge core like Cortex-X925 for the ST score, together with an over-provisioned set of medium-size cores that can run all together only for the short time required to complete the GB6 sub-benchmarks, but in real prolonged usage must never all be completely busy at the same time, in order to avoid overheating.
Skymont is an improvement but...
Skymont's area efficiency should be compared to Zen 5c on 3 nm, which has higher IPC, SMT with dual decoders (one for each thread), and full-rate AVX-512 execution.
AMD didn't have major difficulties scaling down its SMT cores to achieve similar performance per area. But Intel went with a different approach, at the cost of having different ISA support on each core type in consumer devices, and of having to produce an SMT version of its P-cores for servers anyway.
It should be noted that Intel Skymont has the same area as Arm Cortex-X4 (a.k.a. Neoverse V3), and should also have the same performance for any backend-limited application. Both use 1.7 square mm in the "3 nm" TSMC fabrication process, while a Zen 5 compact might have almost double that area in the less dense "4 nm" process with full vector pipelines, and a 3 square mm area with reduced vector pipelines in the same less dense process.
Arm Cortex-X4 has the best performance per area among the cores designed by Arm. Cortex-X925 has double the area of Cortex-X4, which results in a much lower performance per area. Cortex-A725 is smaller in area, but its area ratio is likely to be smaller than its performance ratio (for most kinds of execution units Cortex-X4 has twice as many, while for some it has only a 50% or a 33% advantage), so the performance per area of Cortex-A725 is likely worse than that of Cortex-X4 and Skymont.
For any programs that benefit from vector instructions, Zen 5 compact will have a much better performance per area than Intel Skymont and Arm Cortex-X4.
For programs that execute mostly irregular integer and pointer operations, there are chances for Intel Skymont and Arm Cortex-X4 to achieve better performance per area, but this is uncertain.
Intel Skymont and Arm Cortex-X4 have a greater number of integer/pointer execution units per area than Zen 5 compact, even if Zen 5 compact were made with a TSMC process equally dense, which is not the case today.
Despite that, the execution units of Zen 5 compact will be busy a much higher percentage of the time, for several reasons: Zen 5 is better balanced, it has more resources for out-of-order and multithreaded execution, and it has better cache memories. All these factors result in a higher IPC for Zen 5.
It is not clear whether the better IPC of Zen 5 is enough to compensate for its greater area when performing only irregular integer and pointer operations. Most likely, Intel Skymont and Arm Cortex-X4 retain a small advantage in performance per area, i.e. in performance per dollar, because the IPC advantage of Zen 5 (when using SMT) may be in the range of 10% to 50%, while the area advantage of Intel Skymont and Arm Cortex-X4 might be somewhere between 50% and 70%, had they been made with the same TSMC process.
On the other hand, for any program that can be accelerated with vector instructions, Zen 5 compact will crush in performance per area (i.e. in performance per dollar) any core designed by Intel or Arm.
> It seems like the overall discovery from Intel is that highly threaded tasks use less resources (cache, ROPs, etc. etc).
Does that mean that if I take a single-threaded program and split it into multiple threads, it might use less power? I have been telling myself that the only reason to use threads is to get more CPU power or to call blocking APIs. If they're actually more power-efficient, that would change how I weigh threads vs. async.
Not... quite. I think you've got the cause-and-effect backwards.
Programmers who happen to write multiple-threaded programs don't need powerful cores, they want more cores. A Blender programmer calculating cloth physics would rather have 4x weaker cores than 1x P-core.
Programmers who happen to write demanding single-threaded programs need powerful cores. For example, AMD's "X3D" line of CPUs famously has 96MB of L3 cache, and video games running on these very powerful cores get much better performance.
It's not "programmers should change their code to fit the machine". From Intel's perspective, CPU designers should change their core designs to match the different kinds of programmers. Single-threaded (or low-thread) programmers... largely represented by the video game programmers... want P-cores. But not very many of them.
Multithreaded programmers... represented by graphics and a few others... want E-cores. Splitting a P-core into "only" 2 threads is not sufficient; they want 4x or even 8x more cores. Because there are multiple communities of programmers out there, dedicating design teams to creating entirely different cores is a worthwhile endeavor.
--------
> Does that mean if I can take a single-threaded program and split it into multiple threads, it might use less power? I have been telling myself that the only reason to use threads is to get more CPU power or to call blocking APIs. If they're actually more power-efficient, that would change how I weigh threads vs. async
Power-efficiency is going to be incredibly difficult moving forward.
It should be noted that E-cores are not very power-efficient though. They're area-efficient, i.e. cheaper for Intel to make. Intel can sell 4x as many E-cores for roughly the same price/area as 1x P-core.
E-cores are cost-efficient cores. I think they happen to use slightly less power, but I'm not convinced that power-efficiency is their particular design goal.
If your code benefits from cache (i.e. big cores), it's probable that the lowest power cost comes from running on large caches (like P-cores, or Zen5 / Zen5 X3D). Communicating with RAM always costs more power than just communicating with caches, after all.
If your code does NOT benefit from cache (e.g. Blender regularly has 100GB+ scenes for complex movies), then all of those spare resources on P-cores are useless, as nothing fits anyway and the core will spend almost all of its time waiting on RAM. So the E-core will be more power-efficient in this case.
> A Blender programmer calculating cloth physics would rather have 4x weaker cores than 1x P-core.
Don’t they really want GPU threads for that? You wouldn’t get by with just weaker cores.
3 replies →
>A Blender programmer calculating cloth physics would rather have 4x weaker cores than 1x P-core.
Nah, Blender programmer will prefer one core with AVX-512 instead of 4 without it.
4 replies →
> A Blender programmer calculating cloth physics would rather have 4x weaker cores than 1x P-core.
Is this true? In most of my work I'd usually rather have a single serial thread of execution. Any parallelism usually comes with the added overhead of synchronization, and the added mental overhead of having to think about parallel execution. If I could freely pick between 4 IPC worth of a single core or 1 IPC per core across 4 cores, I'd pretty much always pick the single core. The catch is that we're usually not trading like for like. Say I can get 3 IPC worth of a single core, or 4 IPC spread over 4 cores. Now I suddenly have to weigh the overhead and analyze my options.
Would you ever rather have multiple cores or an equivalent single core? Intuitively it feels like there's some mathematics here.
4 replies →
I see "ROP" and immediately think of Return Oriented Programming and exploits...
Lulz, I got some wires crossed. The CPU resource I meant to say was ROB: Reorder Buffer.
I don't know why I wrote ROP. You're right, ROP means return oriented programming. A completely different thing.
"Another unique effect is the L2 shared between 4 cores. This means that thread communication across those 4 cores has much lower latency."
@dragontamer solid point. Consider an in-memory ring shared between two threads. There's a huge difference in throughput and latency between threads that share an L2 (in the same core cluster) and threads on different clusters, all down to the relative slowness of L3.
Are there other cpus (arm, graviton?) that have similarly shared L2 caches?
Hyperthreading actually shares the L1 cache between two threads (after all, the two threads are running on the same core, and thus in the same L1 cache).
I believe the SMT4 and SMT8 cores of IBM POWER10 also share L1 caches (up to 8 threads on one core), and thus benefit from fast communication.
But you're right in that this is a very obscure performance quirk. I'm honestly not aware of any practical code that takes advantage of it. E-cores are perhaps the most "natural" implementation of high-speed core-to-core communication though.
What we desperately need before we get too deep into this is stronger support in languages for heterogeneous cores in an architecture agnostic way. Some way to annotate that certain threads should run on certain types of cores (and close together in memory hierarchy) without getting too deep into implementation details.
I don't think so. I don't trust software authors to make the right choice, and even the most tilted examples, where a thread will usually need a bigger core, can afford to wait for the scheduler to figure it out.
And if you want to be close together in the memory hierarchy, does that mean close to the RAM that you can easily get to? And you want allocations from there? If you really want that, you can use numa(3).
> without getting too deep into implementation details.
Every microarchitecture is a special case about what you win by being close to things, and how it plays with contention and distances to other things. You either don't care and trust the infrastructure, or you want to micromanage it all, IMO.
> I've had lots of debates with people online about this design vs Hyperthreading. It seems like the overall discovery from Intel is that highly threaded tasks use less resources (cache, ROPs, etc. etc).
AMD did something similar before. Doesn't anyone remember the Bulldozer cores sharing resources between pairs of cores?
AMD's modern SMT implementation is just better than Bulldozer's decoder sharing.
Modern Zen5 has a very good SMT implementation. Ten years ago, people mostly talked about Bulldozer's design to make fun of it. It was always intriguing to me, but it just never seemed practical in any workload.