Comment by mlyle
3 months ago
> where your standard pc will have thousands of cores
Thousands of non-GPU cores, intended to run normal tasks? I doubt it.
Thousands of special-purpose cores running different programs, like managing power, managing networks, managing RGB lighting? Maybe, but that doesn't really benefit from this.
Thousands of cores including GPU cores? What you're talking about in labelling locality isn't sufficient to address this problem, and isn't really even a significant step towards its solution.
> CPUs are trending towards heterogeneous many-core implementations. A 16-core CPU was considered server-exclusive a decade or so ago; now we're at a heterogeneous 24-core Intel 14900K. The biggest limit right now is on the software side, hence my original comment. I wouldn't be surprised if someday the CPU and GPU become combined to overcome the memory wall, with many different types of specialized cores depending on the use case.
The software side is limited, somewhat intrinsically (there tend to be a lot of things we want to do in order; Amdahl's law wins).
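A back-of-the-envelope sketch of why that bites (the 95%/99% parallel fractions below are made-up numbers, purely for illustration):

```cpp
// Amdahl's law: speedup on N cores = 1 / ((1 - p) + p / N),
// where p is the fraction of the work that parallelizes.
// The serial fraction (1 - p) caps the gain no matter how many cores you add.
#include <cstdio>
#include <initializer_list>

double amdahl(double p, double n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    for (double p : {0.95, 0.99}) {  // hypothetical parallel fractions
        std::printf("p = %.2f: 24 cores -> %5.1fx, 1000 cores -> %5.1fx, infinite cores -> %5.1fx\n",
                    p, amdahl(p, 24), amdahl(p, 1000), 1.0 / (1.0 - p));
    }
}
```

Even a 99%-parallel workload tops out at 100x, so going from 24 cores to thousands buys far less than the core count suggests.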
And even when you aren't intrinsically limited by that, optimal placement doesn't reduce contention that much (assuming you're not ping-ponging a single cache line every operation or something dumb like that).
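For concreteness, a rough false-sharing micro-benchmark sketch of that cache-line ping-pong case (struct names, the 64-byte line size, and the iteration count are all my assumptions; exact timings vary by machine):

```cpp
// Two threads each bump their own counter. In Shared the counters typically
// sit on one cache line, so the line bounces between cores on every RMW;
// in Padded each counter gets its own line and the threads stop interfering.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct Shared {                        // counters likely share a cache line
    std::atomic<long> a{0}, b{0};
};
struct Padded {                        // each counter on its own 64-byte line
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

template <class T>
double run(T& s, long iters) {
    auto t0 = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < iters; ++i) s.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (long i = 0; i < iters; ++i) s.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    Shared s;
    Padded p;
    const long iters = 50'000'000;     // arbitrary iteration count
    std::printf("shared line: %.2fs\n", run(s, iters));
    std::printf("padded:      %.2fs\n", run(p, iters));  // usually several times faster
}
```

That gap is the pathological case; once you've padded it away, shuffling threads around for locality tends to move the needle much less.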
But the hardware side is limited, too: we're not getting new transistors that quickly anymore, and we don't want cores much smaller than an Intel E-core. Even if we stack dies in 3D, all that net wafer area isn't cheap and isn't getting cheaper quickly.