Comment by dsr_
6 years ago
On the surface, btrfs is pretty close to zfs.
Once you actually use them, you discover all the ways that btrfs is a pain and zfs is a (minor) joy:
- snapshot management
- online scrub
- data integrity
- disk management
I lost data from perfectly healthy-appearing btrfs systems twice. I've never lost data on maintained zfs systems, and I now trust a lot more data to zfs than I ever have to btrfs.
At least disk management is far easier with btrfs. You can restripe at will while zfs has severe limitations around resizing, adding and removing devices.
Granted, at enterprise scale this hardly matters because you can just send-receive to rebuild pools if you have enough spares, but for consumer-grade deployments it's a non-negligible annoyance.
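To make the restriping point concrete, here is roughly what it looks like on the btrfs side (device names and mount point are placeholders, not a recipe):

  # grow the filesystem and convert everything to raid1, while mounted
  btrfs device add /dev/sdc /mnt/pool
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
  # later, evacuate and drop a device to shrink it again
  btrfs device remove /dev/sdb /mnt/pool

On the ZFS side you can grow a pool (zpool add, zpool attach), but you can't reshape an existing raidz vdev, and shrinking is limited to removing simple top-level vdevs.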
Restriping is a source of unsafety, though. A lot of ZFS's data safety comes from the fact that it never overwrites data in place, so normal operation can't introduce unrecoverable corruption; every write goes through the same copy-on-write machinery that snapshots rely on.
ZFS wanted to have that too (the mythical block pointer rewrite) but it never happened; instead they added clunky workarounds like the indirection tables used for device removal.
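A rough sketch of what that device-removal workaround looks like (pool and device names are placeholders):

  # evacuate a simple top-level vdev; its old block addresses are kept in a
  # remapping table instead of being rewritten in the block pointers
  zpool remove tank sdb
  zpool status tank    # the removed vdev lingers as an indirect-N entry

It works, but reads of remapped data pay for the extra lookup, which is exactly the sort of thing block pointer rewrite was supposed to make unnecessary.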
Actually, this matters a lot in many enterprises. Beancounters hate excess capacity, so there are never enough spares and everything is always almost full.
Maybe SV is different...
Since the plural of anecdote is data, I'll provide mine here. ZFS is the only file-system from which I've lost data on hardware that was functioning properly, though that does come with a caveat.
Twice btrfs ended up in a non-mountable situation, but both times it was due to a known issue and #btrfs on freenode was able to walk me through getting it working again.
With ZFS, I ended up with a non-mountable system, and the response in both #zfs and #zfsonlinux to my posting the error message was, "that sucks, hope you had backups." Since I had backups, and it was my laptop 2000 miles from home and my only computing device, I didn't dig deeper to see if I could discover the problem. FWIW, I've been using ZFS on that same hardware for almost 2 years since with no issues.
Thanks for your answer and sorry for your data loss.
> I lost data from perfectly healthy-appearing btrfs systems twice.
I still consider btrfs as beta-level software. This is why I never looked into it very seriously and asked this question.
Looks like btrfs has something around five years to go before it can be considered serious at the scale where ZFS is just starting to warm up.
The one thing I can't understand about btrfs is that there is no firm answer to the question "How much disk space do I have left?". I don't get how that can be a "this much, maybe" answer:
  # btrfs filesystem usage /
  Overall:
      "Free (estimated)"
btrfs is such a mess that for a database or VM image to be even marginally stable, you have to disable the CoW feature set for those files with the +C attribute. It's nowhere near a serious solution.
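For reference, the workaround looks roughly like this (the path is just an example; +C only takes effect for files created after the attribute is set, which is why you put it on the directory before creating the images):

  mkdir -p /var/lib/libvirt/images
  chattr +C /var/lib/libvirt/images     # new files in here inherit nodatacow
  lsattr -d /var/lib/libvirt/images     # should now show the 'C' flag

And since nodatacow also implies no checksums for those files, you're giving up exactly the data-integrity guarantees that were the point of btrfs in the first place.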