Comment by JonChesterfield

2 years ago

Interesting idea. I'm not confident it holds; access time to memory is usually described as some count of cycles, e.g. ~3 cycles to L1 and some number of hundreds to somewhere further out in the hierarchy.

Memory is also positioned at some distance from the CPU (or whatever silicon is doing the arithmetic), so copying from one place to another involves copying into and then back out of the CPU.

More memory is slower, but within a given level of the cache hierarchy I'd guess access time to any address is roughly constant. How much variation is there in latency as a function of physical address within, say, system-level DDR4?

The variation is not smooth in real systems, but it's there, just as you've noticed: it's right in the L1->L2->L3->RAM->Disk hierarchy.

Each one is physically bigger, further away, and higher latency.

We might one day have 1 PB memory systems with 1 TB of on-chip cache… but the larger memory will still need more space and be further away…
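A crude way to see those hierarchy steps (rather than a smooth latency curve) is a dependent pointer chase over growing working sets: each load depends on the previous one, so the CPU can't hide the latency. This Python sketch is mine, not from the thread (the function name `chase_latency` and the chosen sizes are illustrative); Python's object overhead blurs the effect, so a C version would show the L1/L2/L3/RAM plateaus much more cleanly, but the structure of the measurement is the same.

```python
import random
import time

def chase_latency(n, seed=0):
    """Average ns per dependent access over a random cycle of n slots."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    # Build next-pointers forming one random cycle, so every access
    # depends on the result of the previous one (no prefetch-friendly stride).
    nxt = [0] * n
    for i in range(n - 1):
        nxt[order[i]] = order[i + 1]
    nxt[order[-1]] = order[0]
    # Walk the whole cycle once and time it.
    p = order[0]
    t0 = time.perf_counter()
    for _ in range(n):
        p = nxt[p]
    dt = time.perf_counter() - t0
    return dt / n * 1e9  # nanoseconds per access

# Working sets from KB-scale (cache-resident) to MB-scale (RAM-bound).
for n in (1 << 12, 1 << 16, 1 << 20):
    print(f"{n:>8} slots: {chase_latency(n):.1f} ns/access")
```

On real hardware the per-access cost jumps in steps as the working set outgrows each cache level, which is the non-smooth variation described above.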