Comment by marcan_42
4 years ago
Your thinking is outdated, at least. Modern filesystems and databases are designed to prevent data loss in that scenario.
The last time I saw a modern filesystem eat itself on sudden power loss was when I was evaluating btrfs in a datacenter setting; that alone told me it was not a reliable FS, and we went with something else. I've never seen it happen with ext4 or XFS (configured properly) in over a decade, assuming the underlying storage is well-behaved.
OTOH, I've seen cases of data in files being replaced by zeroes, and applications crashing because of it (it's pretty common for zsh to complain that .zsh_history is corrupted after a crash, due to a trailing block of zeroes). This happens when filesystems are mounted with metadata journaling but no data journaling. If you use data journaling (or a filesystem designed to inherently avoid this, e.g. a COW filesystem), that situation can't happen either. Most databases are designed to gracefully handle this kind of situation without requiring systemwide data journaling, though. That's a tradeoff available to the user, depending on their specific use case and whether their applications are designed with it in mind.
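The failure signature is easy to recognize: with metadata-only journaling (e.g. ext4's default `data=ordered` as opposed to `data=journal`), a crash can leave a freshly-extended file whose length was committed but whose last data blocks never hit disk, so they read back as zeroes. A minimal sketch of checking for that; the function name and the fake history file are illustrative, not anything zsh itself ships:

```python
import os

def trailing_zero_bytes(path, probe=4096):
    """Count trailing NUL bytes at the end of a file -- the typical
    signature of post-crash zero-fill on metadata-only journaling."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(max(0, size - probe))
        tail = f.read()
    count = 0
    for byte in reversed(tail):
        if byte != 0:
            break
        count += 1
    return count

# Simulate the failure mode: a history file whose last block was
# replaced by zeroes after a sudden power loss.
with open("/tmp/fake_history", "wb") as f:
    f.write(b": 1700000000:0;ls\n" + b"\x00" * 512)

print(trailing_zero_bytes("/tmp/fake_history"))  # -> 512
```

A tool that appends to a log could run a check like this on startup and truncate the zeroed tail rather than refuse to load, which is roughly what well-behaved applications do in this situation.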
How long ago did you last test btrfs?