Comment by shadowpho
3 months ago
Here’s a page comparing the caches of 8 modern SSDs; notice how they all fall off once the cache is full.
https://pcpartpicker.com/forums/topic/423337-animated-graphs...
That has nothing to do with DRAM; that would be completely obvious if you stopped to think about the cache sizes implied by writing at 5-6GB/s for tens of seconds before speeds drop. Nobody's putting 100+ GB of DRAM on a single SSD. You get at most 1GB of DRAM per 1TB of NAND.
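A quick back-of-envelope check makes the point concrete (the 5.5 GB/s speed and 30 s burst duration are illustrative assumptions, as is the 4 TB drive size):

```python
# How big would a write cache have to be to absorb the bursts in those graphs?
write_speed_gb_s = 5.5   # assumed sequential write speed during the burst (GB/s)
burst_seconds = 30       # assumed time before the graphs show speeds dropping

implied_cache_gb = write_speed_gb_s * burst_seconds

# Rule of thumb from above: ~1 GB of DRAM per 1 TB of NAND,
# so even a hypothetical 4 TB drive has only ~4 GB of DRAM.
dram_gb_on_4tb_drive = 4

print(implied_cache_gb)                          # 165.0 GB of cache implied
print(implied_cache_gb / dram_gb_on_4tb_drive)   # ~41x more than the DRAM on board
```

So whatever is absorbing those writes, it cannot be DRAM; it has to be the NAND itself.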
What those graphs illustrate is SLC caching: writing faster by storing one bit per NAND flash memory cell (imprecisely), then eventually re-packing that data to store three or four memory bits per cell (as is necessary to achieve the drive's nominal capacity). Note that this only directly affects write operations; reading data at several GB/s is possible even for data that's stored in TLC/QLC cells, and can be sustained for the entire capacity of the drive.
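The shape of those graphs can be sketched as a toy model (the speeds and cache size below are made-up illustrative numbers, not measurements of any real drive):

```python
def write_throughput(bytes_written, cache_bytes, fast_gbps=6.0, slow_gbps=1.5):
    """Toy model of SLC caching: writes land in the fast one-bit-per-cell
    cache until it fills, then fall to the slower native TLC/QLC speed."""
    return fast_gbps if bytes_written < cache_bytes else slow_gbps

# Hypothetical 2 TB TLC drive with ~200 GB of dynamic SLC cache.
cache = 200e9

print(write_throughput(50e9, cache))   # within the cache: full speed
print(write_throughput(250e9, cache))  # cache exhausted: ~4x drop
```

Reads never go through this mechanism, which is why the drop only shows up on the write curves.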
Interesting, that makes sense, but my point still stands: all 8 of those drives drop 4-8x in speed at some point, so for sustained sequential transfers they can only keep up with 4-8x fewer PCIe lanes.
The performance drop due to SLC caching only applies to writes. Sequential reads (and often, even random reads at sufficiently high queue depth) will still more or less saturate the PCIe link. Most workloads and use cases read a lot more data than they write.