Comment by monocasa

4 years ago

Last time I checked (which was a while ago at this point, pre-SSD), nearly all consumer drives and even most enterprise drives would lie in response to commands to flush the drive cache. When I was working on a storage appliance at the time, the specifics of a major drive manufacturer's secret SCSI vendor-page knock to actually flush their cache were one of the things under their deepest NDAs. Apparently ignoring cache flushes was so ubiquitous that any drive manufacturer looking to have correct semantics would take a beating in benchmarks and lose market share. : \

So, as of about 2014, any difference here that wasn't backed by per-manufacturer secret knocks or NDA'd one-off drive firmware was just a magic show, with perhaps Linux at least being able to say "hey, at least the kernel tried and it's not our fault". The cynic in me thinks that the BSDs continuing to define fsync() as only pushing data to the drive cache is a way to keep a semantically clean pathway for "actually flush" that storage appliance vendors can bolt onto the side of their kernels but can't upstream because of the NDAs. A sort of dotted line around missing functionality that is obvious if you know to look for it.
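For what it's worth, that split in fsync() semantics is visible from userspace today: on macOS, fsync() only promises the data reached the drive, and a separate fcntl(F_FULLFSYNC) is the documented "actually flush the drive cache" request. A minimal sketch (the function name `durable_write` is mine, not from any particular codebase):

```python
import fcntl
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and ask the storage stack to make it durable.

    On Linux, fsync() is documented to also send a cache-flush command
    to the device. On macOS, fsync() only pushes data to the drive;
    fcntl(F_FULLFSYNC) is the separate request to flush the drive's
    own write cache (whether the drive honors it is another matter,
    which is the whole point of the thread above).
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush OS page cache / buffers down to the drive
        if hasattr(fcntl, "F_FULLFSYNC"):  # only defined on macOS
            fcntl.fcntl(fd, fcntl.F_FULLFSYNC)  # ask the drive itself to flush
    finally:
        os.close(fd)
```

The `hasattr` guard is just so the sketch runs on both platforms; on Linux the plain fsync() is already supposed to imply the device flush.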

It wouldn't surprise me at all if Apple's NVMe controller is the only drive you can easily get your hands on that actually does the correct thing on flush, since they're pretty much the only ones without the perverse market pressure to intentionally implement it incorrectly.

Since this is getting updoots: sort of in defense of the drive manufacturers (or at least to state one of the defenses I heard), they try to spec out the capacitance on the drive so that when the controller gets a power-loss NMI, it generally has enough time to flush then. That always seemed like a stretch for spinning rust (the drive motor itself is quite a chonker at the watts and milliseconds being talked about, particularly considering seeks are in the 100ms range to start with; then again, spinning rust carries pretty big electrolytic caps, so maybe they can go longer?), but it might be less of a white lie for SSDs. If they can stay up for 200ms after power loss, I can maybe see them being able to flush the cache. Gods help those HMB drives, though: I don't know how you'd guarantee access to the host memory used for cache on power loss without a full-system approach to what power loss looks like.

Flush with other vendors at least does something, as they block for some time too, just not as long as Apple.

Apple's implementation is weird because the actual amount of data written doesn't seem to affect the flush time.

  • On at least one drive I saw, the flush command was instead interpreted as a barrier on commands being committed to the log in controller DRAM, which could cut into parallelization and therefore throughput, so it looked like a latency spike but wasn't actually a flush out of the cache.

In my benchmarking of some consumer HDDs, back in 2013 or so, the flush time was always what you'd expect based on the drive's RPM. I saw no evidence the drive was lying to me. These were all 2.5" drives.
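That kind of check can be sketched with nothing but write-then-fsync timing: on an HDD whose flush genuinely waits for the platter, the median latency should be in the ballpark of rotational latency (half a revolution is 60/RPM/2 seconds, e.g. ~4.2ms at 7200 RPM), while a lying drive returns suspiciously faster. This is my own illustrative sketch (the function name `median_flush_latency` and the parameters are assumptions), not the benchmark the commenter ran:

```python
import os
import statistics
import time

def median_flush_latency(path: str, rounds: int = 20, payload: int = 4096) -> float:
    """Time repeated write+fsync cycles and return the median latency
    in seconds. On a drive that honors flushes, an HDD's median should
    track rotational latency; near-zero medians suggest the flush is
    being absorbed by a volatile cache."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    samples = []
    buf = os.urandom(payload)
    try:
        for _ in range(rounds):
            os.write(fd, buf)
            t0 = time.perf_counter()
            os.fsync(fd)  # on Linux this should also flush the device cache
            samples.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
    return statistics.median(samples)
```

Note the caveat from upthread applies: fsync() timing only tells you how long the drive *blocked*, not whether the data actually reached stable media.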

My understanding was that the capacitor thing on HDDs is to ensure the drive completely writes out a whole sector, so it passes the checksum. I only heard the flush-cache thing with respect to enterprise SSDs. But I haven't been staying on top of things.