Comment by HorstG
6 years ago
Some features such as RAID5 were still firmly in "don't use if you value your data" territory last I looked. So it is important to be informed as to what can be used and what might be more dangerous with Btrfs.
Keep in mind that RAID5 isn’t feasible with multi-TB disks (the probability of failed blocks when rebuilding the array is far too high). That said, RAID6 also suffers the same write-hole problem with Btrfs. Personally I choose RAIDZ2 instead.
> Keep in mind that RAID5 isn’t feasible with multi-TB disks (the probability of failed blocks when rebuilding the array is far too high).
What makes you say that? I've seen plenty of people make this claim based on URE rates, but I've also not seen any evidence that it is a real problem for a 3-4 drive setup. Modern drives are specced at 1 URE per 10^15 bits read (or better), so less than 1 URE in 125 TB read. Even if a rebuild did fail, you could just start over from a backup. Sure, if the array is mission critical and you have the money, use something with more redundancy, but I don't think RAID5 is infeasible in general.
Last time I checked (a few years ago, I must say), a 10^15 URE rate was only for enterprise-grade drives, not consumer-level ones, where most drives are specced at 10^14 URE. Which means your rebuild is almost guaranteed to fail on a large-ish RAID setup. So yeah, RAID5 is still feasible with multi-TB disks if you have the money to buy disks with the appropriate reliability. For the common folk, RAID5 is effectively dead at today's disk sizes.
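The disagreement above comes down to arithmetic, so here is a minimal sketch of the rebuild-failure probability implied by the quoted URE specs. It assumes UREs are independent and actually occur at the manufacturer's specified rate (which, as noted below, is itself contested), and uses a Poisson approximation:

```python
import math

def rebuild_failure_probability(read_tb, ure_per_bits):
    """P(at least one URE) when reading `read_tb` terabytes during a
    rebuild, given a spec of 1 URE per `ure_per_bits` bits read."""
    bits_read = read_tb * 1e12 * 8            # decimal TB -> bits
    expected_errors = bits_read / ure_per_bits
    return 1 - math.exp(-expected_errors)     # Poisson approximation

# Rebuilding a 3x4TB RAID5 reads the two surviving disks, i.e. 8 TB:
print(rebuild_failure_probability(8, 1e14))   # consumer 10^14 spec: ~0.47
print(rebuild_failure_probability(8, 1e15))   # enterprise 10^15 spec: ~0.06
```

So at the 10^14 consumer spec a small-array rebuild is roughly a coin flip, not "almost guaranteed" to fail, while at 10^15 it is a minor risk; both sides of the thread are consistent with the math once you fix which spec you assume.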
Manufacturer-specified URE rates are extremely conservative. If UREs actually occurred at the specified rate, you'd notice transient errors during ZFS scrubs, which are effectively a "rebuild" that doesn't rebuild anything.
To be sure, it's entirely feasible, just not prudent with today's typical disk capacities.
Feasible is different from possible, and carries a strong connotation of being suitable, of being able to be done successfully. Many things are possible; many of those things are not feasible.
Btrfs has many more problems than data loss with RAID5.
It has terrible performance problems under many typical usage scenarios. This is a direct consequence of the choice of core on-disk data structures. There's no workaround without a complete redesign.
It can become unbalanced and cease functioning entirely. Some workloads can trigger this in a matter of hours. Unheard of for any other filesystem.
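For readers unfamiliar with the unbalanced-allocation failure mode: Btrfs allocates raw space into data and metadata chunks, and once all raw space is allocated to half-empty chunks, the filesystem can return ENOSPC even though `df` reports free space. The standard workaround is a manual rebalance. A sketch, assuming a Btrfs filesystem mounted at `/mnt` (run as root):

```shell
# Show how space is split between data and metadata chunks, and how much
# of each allocation is actually used. A large gap between "total" and
# "used" indicates mostly-empty chunks that a balance can reclaim.
btrfs filesystem df /mnt

# Rewrite only data/metadata chunks that are at most 50% full, compacting
# them and returning the freed space to the unallocated pool. The usage
# filters keep the balance fast; a full unfiltered balance rewrites everything.
btrfs balance start -dusage=50 -musage=50 /mnt
```

Note this is a mitigation the administrator must know to run, which is part of the complaint: other filesystems don't stop accepting writes in this way in the first place.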
It suffers from critical data-loss bugs in setups other than RAID5. A number of these have been fixed, but when reliability is the filesystem's key selling point, many of us remain concerned that more still exist, particularly in poorly exercised code paths that run only in rare circumstances, such as when critical faults occur.
And that's only getting started...