Comment by EugeneOZ

4 years ago

Click-bait title again.

“You can lose some of your file changes in case of hard-reboot” is more correct.

It has always been a given truth for me, and I can tolerate some data loss if power to my desktop is accidentally cut, or if the OS panics (that happens to me about once a year).

If this is the price for a 1000x speed increase, I’m more than happy they implemented it this way.

You can lose some file changes even after asking the OS, in the normal way, to make sure they don't get lost.

That's a problem. It means that, for example, transactional databases (which cannot afford to lose data like that) take a huge performance hit on these machines, since they have to use F_FULLFSYNC. And since that "no really, save my data" feature is not the standard fsync(), any portable software that relies on fsync() will be safe on Linux, but unsafe on macOS, by default. That is a significant gotcha.
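
For concreteness, here is a minimal sketch of a portable "really make it durable" helper, assuming only POSIX fsync() plus the macOS-specific fcntl(F_FULLFSYNC); the name full_sync() and the fallback behaviour are illustrative choices, not something from the article:

```c
/*
 * Minimal sketch of a "really flush this to stable storage" helper.
 * Assumes POSIX fsync() everywhere, plus the macOS-specific
 * fcntl(F_FULLFSYNC) where it is defined.  The function name
 * full_sync() is invented for this example.
 */
#include <fcntl.h>
#include <unistd.h>

int full_sync(int fd)
{
#ifdef F_FULLFSYNC
    /* macOS: fsync() only pushes data to the drive, not through its
     * write cache.  F_FULLFSYNC asks the drive to flush the cache too,
     * which is where the durability penalty in this thread comes from. */
    if (fcntl(fd, F_FULLFSYNC) == 0)
        return 0;
    /* Some filesystems (e.g. network mounts) reject F_FULLFSYNC;
     * fall back to plain fsync() rather than failing outright. */
#endif
    /* Linux and other POSIX systems: fsync() is expected to include
     * a device cache flush on a sane storage stack. */
    return fsync(fd);
}
```

SQLite, for example, does roughly this when its fullfsync option is enabled.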

The question is why do other NVMe manufacturers not have such a performance penalty? 10x is fine; 1000x is not. This is something Apple should fix. It's a firmware problem.
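
Anyone curious can measure the gap themselves. A rough microbenchmark sketch (file name, loop count, and buffer size are arbitrary) that times plain fsync() against fcntl(F_FULLFSYNC) where the latter is defined:

```c
/* Rough sketch: time 100 small write+flush cycles with fsync() and,
 * where the platform defines it, with fcntl(F_FULLFSYNC).
 * Absolute numbers depend entirely on the drive. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static double bench(int fd, int use_full)
{
    struct timespec t0, t1;
    char buf[512];

    (void)use_full;               /* unused where F_FULLFSYNC is absent */
    memset(buf, 'x', sizeof buf);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100; i++) {
        (void)pwrite(fd, buf, sizeof buf, 0);
#ifdef F_FULLFSYNC
        if (use_full) {
            (void)fcntl(fd, F_FULLFSYNC);
            continue;
        }
#endif
        (void)fsync(fd);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int fd = open("bench.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("fsync():      %.3f s for 100 flushes\n", bench(fd, 0));
    printf("F_FULLFSYNC:  %.3f s for 100 flushes\n", bench(fd, 1));
    close(fd);
    unlink("bench.tmp");
    return 0;
}
```

On Linux the two runs are identical, since F_FULLFSYNC does not exist there; on macOS the second line is where any slowdown would show up, depending on the drive.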

  • No, it’s not a problem; it is expected. If you are running a transactional database on your desktop, at least add a UPS to your system.

    • The whole point of a transactional database is that even in the case of a power loss you do not lose data. If your UPS blows up and you lose power, you should not lose data.

      The point here is that on Apple systems, if you do the correct thing, your performance drops to that of spinning disks.

The problem isn't that the default case is unsafe -- the problem is that the safe case is so extremely slow.

I guess I am old, but the assumption I live by is that if power is suddenly cut from a computer, desktop or laptop alike, it can damage the FS and/or cause data loss.

For any mission critical stuff, I have it behind a UPS.

  • At least your thinking is old. Modern filesystems and databases are designed to prevent data loss in that scenario.

    The last time I saw a modern filesystem eat itself on sudden power loss was when I was evaluating btrfs in a datacenter setting, and that absolutely told me it was not a reliable FS and we went with something else. I've never seen it happen with ext4 or XFS (configured properly) in over a decade, assuming the underlying storage is well-behaved.

    OTOH, I've seen cases of, e.g., data in files being replaced by zeroes and applications crashing because of it (it's pretty common for zsh to complain about .zsh_history being corrupted after a crash, due to a trailing block of zeroes). This happens when filesystems are mounted with metadata journaling but no data journaling. If you use data journaling (or a filesystem designed to inherently avoid this, e.g. a COW filesystem), that situation can't happen either. Most databases are designed to handle this kind of situation gracefully without requiring system-wide data journaling, though, and applications can protect individual files with the usual write-temp-then-rename pattern (see the sketch at the end of the thread). That's a tradeoff available to the user, depending on their specific use case and whether their applications are designed with it in mind.

  • Modern filesystems are designed so that you can unplug the hard drive and it will not be in a corrupted state.

    UPSes can and do fail.

  • No, modern filesystems aren't expected to be corrupted by sudden power loss, and "put it behind a UPS" assumes that it's impossible for a UPS to fail.
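
Since the thread keeps circling back to the zero-filled-file failure mode, here is a minimal sketch of the standard defense applications use when they cannot rely on data journaling, assuming plain POSIX I/O; the function name replace_file_atomically() and the assumption that both paths live in the current directory are invented for the example:

```c
/*
 * Sketch of the usual application-level defense against the
 * "file full of zeroes after a crash" failure mode mentioned above:
 * write a temporary file, flush it, then atomically rename() it over
 * the old one and flush the directory entry.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int replace_file_atomically(const char *path, const char *tmp_path,
                            const void *data, size_t len)
{
    int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    /* Get the new contents onto stable storage before exposing them. */
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp_path);
        return -1;
    }
    close(fd);

    /* rename() is atomic: after a crash, readers see either the old
     * file or the complete new one, never a half-written version. */
    if (rename(tmp_path, path) != 0)
        return -1;

    /* Flush the directory so the rename itself survives a power cut
     * (assumes both paths are in the current directory). */
    int dfd = open(".", O_RDONLY);
    if (dfd >= 0) {
        (void)fsync(dfd);
        close(dfd);
    }
    return 0;
}
```

Note that if full durability is required on macOS, the fsync() calls here run into the same F_FULLFSYNC trade-off discussed at the top of the thread.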