Comment by shadowpho
16 hours ago
>Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers
Sorta yes, but kinda the other way around: you'll mostly notice it in short, high-intensity bursts of I/O. This is mostly the case for people who use them to run remote-mounted VMs.
Nowadays all NVMe drives have a cache on board (DDR3 memory is common), which is how they manage to keep up such high speeds. However, once you exhaust the cache, speeds drop dramatically.
But your point is valid that very few people actually notice a difference.
You're pretty far off the mark about SSD caching. A majority of consumer SSDs are now DRAM-less, and they can still exceed PCIe 4.0 x4 bandwidth for sequential transfers. Only a seriously outdated SSD would still be using DDR3; good ones use LPDDR4 or maybe DDR4. And when an SSD does have DRAM, it isn't there for the sake of caching your data; it's for caching the drive's internal metadata that tracks the mapping of logical block addresses to physical NAND flash pages.
Here's a page comparing the caches of 8 modern SSDs; notice how they all fall off once the cache is full.
https://pcpartpicker.com/forums/topic/423337-animated-graphs...
That has nothing to do with DRAM; that would be completely obvious if you stopped to think about the cache sizes implied by writing at 5-6 GB/s for tens of seconds before speeds drop. Nobody's putting 100+ GB of DRAM on a single SSD. You get at most 1 GB of DRAM per 1 TB of NAND.
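The back-of-envelope arithmetic here is worth spelling out. With illustrative numbers (5.5 GB/s and a 30-second burst are assumptions, not figures from any specific drive), the implied cache size dwarfs any plausible DRAM budget:

```python
# Hypothetical numbers: how big would a cache need to be to absorb the burst?
write_speed_gb_s = 5.5    # observed burst write speed, GB/s (assumed)
burst_duration_s = 30     # "tens of seconds" before speeds drop (assumed)
implied_cache_gb = write_speed_gb_s * burst_duration_s
print(f"Implied cache size: {implied_cache_gb:.0f} GB")   # 165 GB

# Compare with the typical DRAM budget of ~1 GB DRAM per 1 TB of NAND.
nand_tb = 2               # a 2 TB drive, for example
dram_gb = nand_tb * 1
print(f"Typical DRAM on a {nand_tb} TB drive: {dram_gb} GB")
```

~165 GB of implied cache versus ~2 GB of DRAM is why the falloff in those graphs can't be a DRAM effect.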
What those graphs illustrate is SLC caching: writing faster by storing one bit per NAND flash memory cell (loosely speaking), then eventually re-packing that data to store three or four bits per cell (as is necessary to achieve the drive's nominal capacity). Note that this only directly affects write operations; reading at several GB/s is possible even for data stored in TLC/QLC cells, and can be sustained across the entire capacity of the drive.
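The shape of those graphs can be sketched with a toy model (all numbers here are illustrative assumptions, not measurements of any real drive): writes land in the fast SLC region until it fills, then drop to the slower native TLC/QLC rate.

```python
# Minimal sketch of SLC-cache write behavior. The cache size and the two
# speed tiers are made-up round numbers chosen only to show the step shape.
def write_throughput_gb_s(bytes_written_gb, slc_cache_gb=150,
                          slc_speed=6.0, native_speed=1.5):
    """Instantaneous write speed at a given point in a large sequential write."""
    if bytes_written_gb < slc_cache_gb:
        return slc_speed      # burst absorbed by the SLC cache
    return native_speed       # cache exhausted: native TLC/QLC write speed

for point_gb in (10, 100, 200):
    print(f"{point_gb:>3} GB in: {write_throughput_gb_s(point_gb)} GB/s")
```

A real drive's curve is messier (the cache shrinks as the drive fills, and background folding recovers some of it during idle time), but the step from cache speed to native speed is the feature those animated graphs are showing.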