Comment by monster_truck

11 hours ago

It really doesn't. In virtually every case the work is being completed faster than the cache can grow to that size. What little gains are being realized are from not having to wait for cores with access to the cache to become available.

> It really doesn't. In virtually every case the work is being completed faster than the cache can grow to that size.

If your tasks don’t benefit, then don’t buy it.

But stop claiming that it doesn’t help anywhere because that’s simply wrong. I do some FEA work occasionally and the extra cache is a HUGE help.

There are also a lot of non-LLM AI workloads whose models are in a size range that fits into this cache.
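A quick back-of-envelope sketch of that point (the cache figure and parameter counts below are illustrative assumptions, not measurements of any specific part or model):

```python
# Rough check: does a model's parameter working set fit in a big L3?
# Assumptions: ~1.1 GB L3 (roughly what the largest V-Cache EPYCs advertise),
# a MobileNet-class detector at ~4M fp32 params, and a 7B-param LLM at fp16.

def model_bytes(params: int, bytes_per_param: int = 4) -> int:
    """Approximate memory footprint of a model's weights alone."""
    return params * bytes_per_param

cache_bytes = 1152 * 1024**2                    # assumed large L3, ~1.1 GB

small_detector = model_bytes(4_000_000)         # ~16 MB at fp32
llm_7b = model_bytes(7_000_000_000, 2)          # ~14 GB even at fp16

print(f"detector: {small_detector / 1024**2:.0f} MB,"
      f" fits: {small_detector < cache_bytes}")
print(f"7B LLM:   {llm_7b / 1024**3:.1f} GB,"
      f" fits: {llm_7b < cache_bytes}")
```

The weights of a small detection model fit in cache with room to spare for activations, while even a modest LLM is an order of magnitude too large, which is why the benefit shows up in some AI workloads and not others.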

There are some very specific workloads (say, simple object detection) that fit entirely into cache and see crazy performance; for those, the value of this CPU will be unbeatable, since the only alternative is one of the large-cache EPYCs. Everywhere else it'll only be a small improvement unless the software is purpose-made for it.