Comment by marcan_42

4 years ago

This affects T2 Macs too, which use the same NVMe controller design as M1 Macs.

We've looked at NVMe command traces from running macOS under a transparent hypervisor, and we've issued NVMe commands outside of Linux from a bare-metal environment. The 20ms flush penalty is there in Apple's NVMe implementation; it's not some OS thing, and other drives don't have it. I also checked, and Apple's NVMe controller does 10MB/s of DRAM memory traffic when it's handling flushes, for some reason (yes, we can get those stats). And we know macOS does not properly flush with just fsync(), because it actively loses data on hard shutdowns. We've been fighting this issue for a while now; it only hit us yesterday/today that there is no magic in macOS: it simply doesn't flush, and doesn't guarantee data persistence, on fsync().
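
A minimal sketch of how that per-flush cost can be measured from userspace (the file name, write size, and iteration count here are arbitrary assumptions, not taken from the comment): write a small block, request a durable flush, and time the flush. On Apple platforms fsync() alone doesn't ask the drive to flush, so the macOS branch uses fcntl(F_FULLFSYNC):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical test file; any file on the drive under test will do. */
        int fd = open("flushtest.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        memset(buf, 0xAA, sizeof(buf));

        for (int i = 0; i < 100; i++) {
            struct timespec t0, t1;
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                perror("write"); return 1;
            }
            clock_gettime(CLOCK_MONOTONIC, &t0);
    #ifdef __APPLE__
            /* macOS: fsync() does not force the drive cache out;
               F_FULLFSYNC asks for an actual flush to stable storage. */
            if (fcntl(fd, F_FULLFSYNC) < 0) { perror("F_FULLFSYNC"); return 1; }
    #else
            /* Linux: fsync() on a durable filesystem ends with a cache flush. */
            if (fsync(fd) < 0) { perror("fsync"); return 1; }
    #endif
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("flush %3d: %.2f ms\n", i, ms);
        }
        close(fd);
        return 0;
    }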

I've just been scanning through the Linux kernel code (including ext4). Are you sure that it's not issuing a PREFLUSH? What are your barrier options on the mount? I think you will find these are going to be more like F_BARRIERFSYNC.

I couldn't find much info about it, but the official docs are here: https://kernel.org/doc/html/v5.17-rc3/block/writeback_cache_...

  • Those are Linux concepts. What you're looking for is the actual NVMe commands. There are two things: FLUSH (which flushes the whole cache), and a WRITE with the FUA bit set (which basically turns that write into write-through, but does not guarantee anything about other commands). The latter isn't very useful for most cases, since you usually want at least barrier semantics, if not a full flush, for previously completed writes. And that leaves you with FLUSH. Which is the one that takes 20ms on these drives.
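
    For reference, a minimal sketch of issuing that FLUSH command directly from userspace on Linux through the NVMe passthrough ioctl (the device path is an assumption, and it needs root); timing this call shows the raw device cost without the filesystem in the way:

        #include <fcntl.h>
        #include <linux/nvme_ioctl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/dev/nvme0n1", O_RDWR);   /* hypothetical namespace device */
            if (fd < 0) { perror("open"); return 1; }

            int nsid = ioctl(fd, NVME_IOCTL_ID);     /* namespace ID of this device */
            if (nsid < 0) { perror("NVME_IOCTL_ID"); return 1; }

            struct nvme_passthru_cmd cmd;
            memset(&cmd, 0, sizeof(cmd));
            cmd.opcode = 0x00;                       /* NVM command set: FLUSH */
            cmd.nsid = (unsigned)nsid;

            /* This is the command that fsync() ultimately triggers at the
               device level (via REQ_PREFLUSH) on Linux. */
            int err = ioctl(fd, NVME_IOCTL_IO_CMD, &cmd);
            if (err) { fprintf(stderr, "FLUSH failed: %d\n", err); return 1; }
            puts("FLUSH completed");
            close(fd);
            return 0;
        }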

    • > Those are Linux concepts. What you're looking for is the actual NVMe commands.

      I'm not sure what commands are being sent to the NVMe drive. But what you are describing as a flush would be F_BARRIERFSYNC - NOT the F_FULLFSYNC which you've been benchmarking.
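
      For what it's worth, a minimal Apple-only sketch of the two fcntl(2) commands being contrasted here, just to make the distinction concrete (error handling omitted; this is not a claim about what either OS actually sends to the drive):

        #include <fcntl.h>

        /* Barrier only: fsync the file, then issue a barrier so later I/O is
           ordered after it. Does not force the drive cache to stable storage. */
        void sync_with_barrier(int fd) {
            fcntl(fd, F_BARRIERFSYNC);
        }

        /* Durable: fsync the file, then ask the drive to flush to stable
           storage. This is the call whose ~20ms cost is being benchmarked. */
        void sync_durable(int fd) {
            fcntl(fd, F_FULLFSYNC);
        }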
