Comment by marcan_42

4 years ago

You can lose some file changes even after asking the OS, the normal way, to make sure they don't get lost.

That's a problem. It means that e.g. transactional databases (which cannot afford to lose data like that) take a huge performance hit on these machines, since they have to use F_FULLFSYNC. And since that "no really, save my data" feature is not the standard fsync(), the same portable software that is safe when built for Linux will be unsafe on macOS by default. That is a significant gotcha.
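As a concrete illustration (a minimal sketch, not from the thread): the portable pattern, used by SQLite among others, is to try fcntl(fd, F_FULLFSYNC) where the platform defines it and fall back to plain fsync() elsewhere. The helper name full_sync is hypothetical.

    #include <fcntl.h>
    #include <unistd.h>

    /* Flush a file's data to stable storage. On macOS, plain fsync()
     * only pushes data to the drive, not through its volatile write
     * cache, so durable commits need fcntl(F_FULLFSYNC). */
    static int full_sync(int fd)   /* hypothetical helper name */
    {
    #ifdef F_FULLFSYNC
        if (fcntl(fd, F_FULLFSYNC) == 0)
            return 0;
        /* F_FULLFSYNC can fail on filesystems that don't support it;
         * fall back to a regular fsync() below. */
    #endif
        return fsync(fd);
    }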

The question is why other NVMe manufacturers' drives don't have such a performance penalty. 10x is fine; 1000x is not. This is something Apple should fix. It's a firmware problem.

No, it’s not a problem; it is expected. If you are running a transactional database on your desktop, at least add a UPS to your system.

  • The whole point of a transactional database is that even in the case of a power loss you do not lose data. If your UPS blows up, and so you lose power, you should not lose data.

    The point here is that on Apple systems, if you do the correct thing, your performance drops to that of spinning disks.

    • Add a secondary UPS. What will be the next excuse?

      It's ridiculous to expect 100% data integrity in the case of power loss; the cut might happen in the middle of a command's execution. If the system should be unkillable, it should have an unkillable power source in the first place.
