Comment by martinald

2 days ago

Two of the main ones actually aren't really DRAM related, but come down to how full the drive is.

Most (all?) SSDs need a good 20% of the drive kept free for garbage collection and wear levelling. Fill past that and the drive can't do this housekeeping "asynchronously" any more; it has to do it inline as things are written, which really impacts speed.
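To make the free-space effect concrete, here's a rough, purely illustrative Python model of a flash translation layer doing greedy garbage collection under random overwrites. Every parameter here (block counts, page counts, the greedy victim policy, the reserve threshold) is made up for illustration and is not how any particular controller works; the point is just the trend that less spare space means each garbage-collection pass has to copy more still-valid data, so every host write costs more flash work.

```python
import random

def write_amplification(user_blocks=100, spare_blocks=25,
                        pages_per_block=64, overwrites=200_000, seed=1):
    """Toy flash-translation-layer model with greedy garbage collection.

    Random single-page overwrites; returns flash writes per host write
    (write amplification). Purely illustrative -- a real controller adds
    wear levelling, hot/cold separation, pseudo-SLC caching, etc.
    """
    rng = random.Random(seed)
    total = user_blocks + spare_blocks
    logical = user_blocks * pages_per_block
    valid = [set() for _ in range(total)]   # logical pages currently stored in each block
    home = [0] * logical                    # which block holds each logical page
    free = list(range(total))
    active, fill, flash_writes = free.pop(), 0, 0

    def append(lpn):                        # program one page into the open block
        nonlocal active, fill, flash_writes
        if fill == pages_per_block:
            active, fill = free.pop(), 0
        valid[active].add(lpn)
        home[lpn] = active
        fill += 1
        flash_writes += 1

    def gc():                               # greedy: erase the block with fewest valid pages
        victim = min((b for b in range(total) if b != active and b not in free),
                     key=lambda b: len(valid[b]))
        for lpn in list(valid[victim]):
            valid[victim].discard(lpn)
            append(lpn)                     # relocating valid data is the hidden cost
        free.append(victim)

    for lpn in range(logical):              # fill the drive once
        append(lpn)
    baseline = flash_writes

    for _ in range(overwrites):             # then overwrite random pages
        lpn = rng.randrange(logical)
        valid[home[lpn]].discard(lpn)       # the old copy becomes garbage
        while len(free) < 2:                # keep a couple of blocks in reserve
            gc()
        append(lpn)
    return (flash_writes - baseline) / overwrites

for spare in (5, 12, 25):                   # roughly 5%, 10%, 20% of the drive kept free
    print(f"{spare} spare blocks: write amplification ~ {write_amplification(spare_blocks=spare):.2f}")
```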

Then on top of that, on cheaper flash like TLC and QLC the drive can go much faster by having free space "pretend" to be SLC, writing data in a way that's very "inefficient" size-wise but fast (think a bit like striped RAID0, except instead of the reliability trade-off you get there, the catch is that it only works while you have extra space available). Once it hits a certain threshold it can't pretend any more, because the fast format uses too much space, and it has to fold the data back into the proper TLC/QLC format.
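A back-of-the-envelope way to see that trade-off: if the dynamic SLC cache is carved out of free user space and each TLC cell holds 1 bit in pseudo-SLC mode instead of 3, the fast-write budget shrinks roughly with free space divided by bits-per-cell. The function and numbers below are hypothetical; real drives also keep a small static SLC region and start folding well before this upper bound.

```python
def pslc_burst_budget_gib(free_user_space_gib, bits_per_cell=3):
    """Very rough upper bound on how much data a drive can absorb at
    pseudo-SLC speed before it must fold into native TLC/QLC.

    Assumes the dynamic SLC cache comes entirely out of free user space
    and that each cell stores 1 bit in SLC mode instead of `bits_per_cell`.
    """
    return free_user_space_gib / bits_per_cell

# A hypothetical 1 TB TLC drive at different fill levels:
for free in (500, 200, 50):
    print(f"{free} GiB free -> ~{pslc_burst_budget_gib(free):.0f} GiB of fast pSLC writes")
```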

These things are additive too, so on cheaper flash things get very, very slow. Learnt this the hard way some years ago when a drive would barely write out at 50% of HDD speed.

> cheaper flash like TLC and QLC the drive can go much faster by having free space "pretend" to be SLC

I'm afraid I don't understand how exactly this makes it faster. In my head, storing fewer bits per write operation should decrease write bandwidth.

Of course we observe the opposite all the time, with SLC flash being the fastest of all.

Does it take significantly more time to store the electrical charge for any given 1-4 bits with the precision required when using M/T/QLC encoding?

  • In theory it should be more efficient, but in reality it's not. Any gains from "modulating" more bits into each cell are eaten up by having to use very aggressive error correction and by having to write/read things multiple times (because the error rate is so high). I think QLC usually needs somewhere on the order of 8 "write/read" cycles, for example, to verify the data is written correctly; a toy sketch of why is below.
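As a toy illustration of why more bits per cell means more program/verify work (not any vendor's actual algorithm): packing 2^n states into the same voltage range makes each target window narrower, so an incremental-step-pulse-programming style loop needs smaller steps and more pulse-then-verify rounds. All the numbers below are invented; only the trend is meaningful.

```python
def ispp_pulses(bits_per_cell, vmax=6.0):
    """Toy incremental-step-pulse-programming (ISPP) model.

    With 2**bits states squeezed into the same voltage range, each target
    window is narrower, so the program step must be smaller and more
    pulse/verify rounds are needed per cell. Invented numbers; trend only.
    """
    levels = 2 ** bits_per_cell
    window = vmax / levels                 # width of one target state
    step = window / 2                      # each pulse must land inside the window
    target = vmax * (levels - 1) / levels  # worst case: the highest state
    voltage, pulses = 0.0, 0
    while voltage < target:                # one program pulse, then one verify read
        voltage += step
        pulses += 1
    return pulses

for bits, name in ((1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")):
    print(f"{name}: ~{ispp_pulses(bits)} program/verify rounds in this toy model")
```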