
Comment by riku_iki

16 hours ago

> One day, the system froze and after reboot the partition was unrecoverably gone (the whole story[1]).

It looks like you didn't use RAID, so any FS could fail in case of disk corruption.

Thank you for your opinion. Well... it did not just fail. Cryptsetup unlocked everything fine, but the BTRFS tools did not find a valid filesystem on the decrypted device.

While it could have been a bit flip that destroyed the whole encryption layer, BTRFS debugging revealed that there were still traces of BTRFS headers after opening the device with cryptsetup, and some of the data on the decrypted partition was there...

This probably means the encryption layer was fine; the BTRFS part just could not be repaired or restored. The only explanation I have is that something resulted in a dirty write which destroyed the partition table, the backup partition table and, since I used subvolumes and could not restore anything, most of the data.
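For anyone who wants to check this kind of damage themselves: btrfs keeps several superblock copies (at 64 KiB, 64 MiB and 256 GiB into the device), each carrying the magic string _BHRfS_M. Here is a minimal sketch, assuming the decrypted device shows up under a placeholder path like /dev/mapper/cryptroot, that just reports which copies still look intact:

```python
#!/usr/bin/env python3
# Quick check for btrfs superblock copies on an already-unlocked
# dm-crypt device. Needs root to read the block device directly.

import sys

# btrfs stores superblock copies at 64 KiB, 64 MiB and 256 GiB.
SUPERBLOCK_OFFSETS = (0x10000, 0x4000000, 0x4000000000)
MAGIC = b"_BHRfS_M"   # magic field, 0x40 bytes into each superblock
MAGIC_OFFSET = 0x40

def check(device: str) -> None:
    with open(device, "rb") as dev:
        for off in SUPERBLOCK_OFFSETS:
            try:
                dev.seek(off + MAGIC_OFFSET)
                intact = dev.read(len(MAGIC)) == MAGIC
            except OSError:
                intact = False  # device smaller than this offset
            print(f"superblock at {off:#x}: {'magic found' if intact else 'missing or damaged'}")

if __name__ == "__main__":
    # /dev/mapper/cryptroot is a placeholder for the decrypted device.
    check(sys.argv[1] if len(sys.argv) > 1 else "/dev/mapper/cryptroot")
```

If none of the copies still carry the magic, recovery tools such as btrfs rescue super-recover or btrfs restore have very little to work with, which matches the "could not be repaired or restored" outcome.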

Well, maybe it was my fault but since I'm using the exact same system with the same hardware right now (same NVMe SSD), I really doubt that.

  • > Well, maybe it was my fault but since I'm using the exact same system with the same hardware right now (same NVMe SSD), I really doubt that.

    Anecdotes can be exchanged in both directions: I have been running heavy data processing at maximum throughput on top of btrfs RAID for 10 years already, and have never had any data loss. I am absolutely certain about this: if you expect data integrity while relying on a single disk, it is your fault.

    • Reliability is about the variety of workloads, not the amount of data or throughput. It's easy to write a filesystem which works well in the ideal case; it's the bad or unusual traffic patterns which cause problems. For all I know, that complete btrfs failure was caused by a kernel crash due to bad USB hardware, or by a cosmic ray hitting a memory chip.

      But you know whose fault it is? It's btrfs's. Other filesystems don't lose entire volumes that easily.

      Over time, I've abused ext4 (and ext3) in all sorts of ways: overwriting random sectors (roughly the kind of test sketched below), mounting it twice (via loop devices so the kernel's double-mount detection did not kick in), using bad SATA hardware which introduced bit errors... There was some data loss, and sometimes I had to manually sort through tens of thousands of files in "lost+found", but I never lost the entire filesystem.

      The only time "entire partition loss" happened to me was when we tried btrfs. It was part of a Ceph cluster, so no actual data was lost... but as you may guess, we never used btrfs again.

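      A rough illustration of the random-sector abuse mentioned above, as a standalone sketch rather than anything from the original tests: it builds a small ext4 image in a plain file, clobbers a handful of random sectors, and lets e2fsck try to repair it (the image name and sizes are arbitrary).

      ```python
      #!/usr/bin/env python3
      # Toy fault-injection run: build a small ext4 image, overwrite a few
      # random sectors with garbage, then let e2fsck try to repair it.
      # Needs e2fsprogs; works on a plain file, so no root is required.

      import os
      import random
      import subprocess

      IMAGE = "ext4-test.img"       # arbitrary scratch file name
      SIZE = 256 * 1024 * 1024      # 256 MiB image
      SECTOR = 512

      # Create a sparse file and format it (-F because it is not a block device).
      with open(IMAGE, "wb") as f:
          f.truncate(SIZE)
      subprocess.run(["mkfs.ext4", "-q", "-F", IMAGE], check=True)

      # Clobber 20 random sectors with garbage.
      with open(IMAGE, "r+b") as f:
          for _ in range(20):
              f.seek(random.randrange(SIZE // SECTOR) * SECTOR)
              f.write(os.urandom(SECTOR))

      # Force a full check and auto-repair; a non-zero exit code just means
      # e2fsck found (and usually fixed) problems. If the primary superblock
      # itself was hit, e2fsck will ask for a backup superblock (see e2fsck -b).
      result = subprocess.run(["e2fsck", "-f", "-y", IMAGE])
      print("e2fsck exit code:", result.returncode)
      ```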

What the hell are you talking about? Any filesystem on any OS I've seen in the last 3 decades has had some kind of recovery path after a crash. Some of them lose more data, some less. But being unable to mount at all is a bug that makes a filesystem untrustworthy and useless.

And how would RAID help in that situation?

  • > But being unable to mount at all is a bug that makes a filesystem untrustworthy and useless.

    We are in disagreement on this. If a partition table entry is corrupted, you can't mount without some low-level surgery (see the sketch below).

    > And how would RAID help in that situation?

    Depending on the RAID level, your data will be duplicated on another disk and will survive the corruption of one or a few disks.
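    On the "low-level surgery" point above: GPT keeps a backup header in the last sector of the disk, so a clobbered primary table can often be rebuilt from the backup (gdisk's recovery menu can do this). Here is a minimal sketch, assuming 512-byte logical sectors and a placeholder /dev/sda path, that just checks whether each copy still carries the "EFI PART" signature:

    ```python
    #!/usr/bin/env python3
    # Check whether the primary and backup GPT headers on a disk are intact.
    # Run as root; /dev/sda is a placeholder, and 512-byte sectors are assumed.

    import os
    import sys

    SECTOR = 512
    SIGNATURE = b"EFI PART"   # first 8 bytes of every GPT header

    def gpt_status(disk: str) -> None:
        fd = os.open(disk, os.O_RDONLY)
        try:
            size = os.lseek(fd, 0, os.SEEK_END)      # device size in bytes
            last_lba = size // SECTOR - 1            # backup header lives here
            for name, lba in (("primary", 1), ("backup", last_lba)):
                os.lseek(fd, lba * SECTOR, os.SEEK_SET)
                sig = os.read(fd, len(SIGNATURE))
                status = "OK" if sig == SIGNATURE else "missing or corrupt"
                print(f"{name} GPT header at LBA {lba}: {status}")
        finally:
            os.close(fd)

    if __name__ == "__main__":
        gpt_status(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")
    ```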