Comment by monocasa
4 years ago
You definitely weren't testing the cache in a meaningful way if you were hovering over the same track.
WRT the capacitor thing being about a single sector, think about the time spans. You should even be able to cut the drive motor power and still stay up for hundreds of ms. In that time you can seek to a config track and blit out the whole cache. If you're already writing a sector you'll be done in microseconds. The whole track spins around every ~8ms at 7200RPM.
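A rough back-of-the-envelope of that timing (every number here is an assumption picked for illustration, not a spec for any real drive):

```python
# Back-of-the-envelope: does a seek plus a dump of the dirty write cache fit
# inside a capacitor hold-up window of a few hundred ms? All values assumed.
RPM = 7200
rotation_ms = 60_000 / RPM            # one revolution: ~8.3 ms
dirty_cache_mib = 16                  # assumed dirty data sitting in the DRAM cache
track_capacity_mib = 1.5              # assumed capacity of one config track
seek_ms = 20                          # assumed worst-case seek to the config area

# Sequential write-out, one revolution per track's worth of data:
writeout_ms = (dirty_cache_mib / track_capacity_mib) * rotation_ms
total_ms = seek_ms + writeout_ms

print(f"revolution: {rotation_ms:.1f} ms")
print(f"write-out:  {writeout_ms:.0f} ms")
print(f"total:      {total_ms:.0f} ms")   # ~110 ms with these made-up numbers
```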
Tangential thinking out loud: this makes me think of a sort of interleaving or striping mechanism that tries to leave a small proportion of every track empty, such that in the ideal power-loss flush scenario the drive simply waits for the platter to spin around to the empty/reserved area on the current track. On drives that aren't completely full, it's probably statistically reasonable that for any given head position there's a track with some reserved space very close by, so the movement/power needed to seek there is small.
Of course, this approach describes a completely inverted complexity scenario in terms of sector remapping management, with the size of the associated tables probably being orders of magnitude larger. :<
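To put a (very) rough number on "orders of magnitude larger", a purely illustrative comparison with made-up but plausible figures:

```python
# Illustrative size comparison of the remapping state; every figure is assumed.
bytes_per_entry = 12                # assumed (logical LBA, physical LBA, flags) record

# Today: only defective sectors get remapped to spare areas.
defect_remaps = 10_000              # assumed generous grown-defect list
defect_table_bytes = defect_remaps * bytes_per_entry

# "Reserve a slice of every track": any cached sector might land in whatever
# reserved slot is closest at power-loss time, so the firmware has to track
# the state of every reserved slot on every track.
tracks = 500_000                    # assumed track count for a multi-TB drive
reserved_slots_per_track = 20       # assumed ~5% of a ~400-sector track
slot_table_bytes = tracks * reserved_slots_per_track * bytes_per_entry

print(f"defect remap table:  ~{defect_table_bytes / 1024:.0f} KiB")
print(f"reserved-slot table: ~{slot_table_bytes / 2**20:.0f} MiB")
```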
Now I wonder how much power is needed for flash writes. Chances are an optimal and viable strategy would involve a bit of multichannel flash on the controller (and some FEC, because why not).
Oooh... I just realized things'll get interesting if the non-volatile RAM thing moves beyond the vaporware stage before HDDs become irrelevant. Last-millimeter write caching will basically cease to be a concern.
But thinking about the problem slightly more laterally, I don't understand why nobody's made inline SATA adapters with RAM, batteries and some flash in them. If they intercept all writes they can remember what blocks made it to the disk, then flush anything in the flash at next power on. Surely this could be made both solidly/efficiently and cheaply...?
> But thinking about the problem slightly more laterally, I don't understand why nobody's made inline SATA adapters with RAM, batteries and some flash in them.
Hardware RAID controllers with battery backup units were really popular starting in the mid '90s until maybe the mid 2010s? Software caught up on a lot of the features, and the batteries failed often and required a lot more maintenance. Supercaps were supposed to replace the batteries, but I think SSDs and software negated a ton of the value add. You can still buy them, but they're pretty rare to see in the wild.
I've heard of those; they sound mildly interesting to play with, if only to go "huh" at and move on. I get the impression the main reason they developed a... strained reputation was their strong tendency to want to do RAID things (involving custom metadata and other proprietaryness) even for single disks, making data recovery scenarios that much more complicated and stressful if that hadn't been turned off. That's my naive projection though; I (obviously) have no real-world experience with these cards, I just knew to steer far away from them (heh)
An inline widget (SATA on both sides) that just implements a write cache and state machine ("push this data to these blocks on next power on") seems so much simpler. You could even have one per disk and connect them to a straightforward RAID/SAS controller. (Hmm, and if you externalize the battery component, you could have one battery feeding several units...)
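A toy sketch of the state machine I have in mind; the whole interface (put/delete/items, write/flush) is made up purely for illustration:

```python
# Hypothetical inline write-cache interposer: every name here is invented.
class WriteInterposer:
    def __init__(self, backup_store, disk):
        self.backup = backup_store   # battery/flash-backed store on the widget
        self.disk = disk             # passthrough to the real drive
        self.unconfirmed = set()     # LBAs not yet known to be on the platters

    def on_host_write(self, lba, data):
        # Persist a copy to the backup store before forwarding to the disk.
        self.backup.put(lba, data)
        self.unconfirmed.add(lba)
        self.disk.write(lba, data)

    def on_disk_flush_ack(self):
        # The drive acknowledged a cache flush, so everything forwarded so far
        # should be on the platters; the backup copies can be dropped.
        for lba in list(self.unconfirmed):
            self.backup.delete(lba)
        self.unconfirmed.clear()

    def on_power_restore(self):
        # Anything still in the backup store never got a confirmed flush:
        # replay it to the disk, flush, then clear the backup store.
        for lba, data in list(self.backup.items()):
            self.disk.write(lba, data)
        self.disk.flush()
        for lba, _ in list(self.backup.items()):
            self.backup.delete(lba)
```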
You're indeed right about the battery/capacitor situation ("you have to open the case?!"); I wouldn't be surprised if the battery level reporting on those RAID cards was far from ideal too, lol
With all this said, a UPS is by far the simplest solution, naturally, but also the most expensive up front.