ZFS 2.3 released with ZFS raidz expansion

5 days ago (github.com)

After years in the making, ZFS raidz expansion is finally here.

Major features added in this release:

  - RAIDZ Expansion: Add new devices to an existing RAIDZ pool, increasing storage capacity without downtime.

  - Fast Dedup: A major performance upgrade to the original OpenZFS deduplication functionality.

  - Direct IO: Allows bypassing the ARC for reads/writes, improving performance in scenarios like NVMe devices where caching may hinder efficiency.

  - JSON: Optional JSON output for the most used commands.

  - Long names: Support for file and directory names up to 1023 characters.

  • > RAIDZ Expansion: Add new devices to an existing RAIDZ pool, increasing storage capacity without downtime.

    More specifically:

    > A new device (disk) can be attached to an existing RAIDZ vdev
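
    For the curious, in practice that is a single attach against the raidz vdev; a rough sketch (pool, vdev and device names are made up):

        zpool attach tank raidz2-0 /dev/sdh   # grow the existing raidz2 vdev by one disk
        zpool status tank                     # expansion progress shows up here; it is resumable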

  • So if I’m running Proxmox on ZFS and NVMe drives, will I be better off enabling Direct IO when 2.3 gets rolled out? What are the use cases for it?

    • Direct IO is useful for databases and other applications that use their own disk caching layer. Without knowing what you run in Proxmox, no one will be able to tell you whether it's beneficial.
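
      If you do want to try it once 2.3 lands, my understanding is that Direct IO is toggled per dataset via a new `direct` property (the dataset name below is just an example; worth double-checking against the 2.3 docs):

          zfs set direct=always tank/vmdata   # bypass the ARC for reads/writes on this dataset
          zfs get direct tank/vmdata          # 'standard' (the default) only honours O_DIRECT from applications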

  • But I presume it is still not possible to remove a vdev.

  • How well tested is this in combination with encryption?

    Is the ZFS team handling encryption as a first class priority at all?

    ZFS on Linux inherited a lot of fame from ZFS on Solaris, but everyone using it in production should study the issue tracker very well for a realistic impression of the situation.

    • The main issue with encryption is the occasional attempt by a certain (specific) Linux kernel developer to lock ZFS out of access to advanced instruction set extensions (far from the only weird idea of that particular developer).

      The way ZFS encryption is layered, the features should be pretty much orthogonal to each other, but I'll admit that ZFS native encryption is a bit lacking (though in my experience mainly in the upper-layer tooling rather than the actual on-disk encryption parts).

      3 replies →

    • The new features should interact fine with encryption. They are implemented in different parts of ZFS' internal stack.

      Many man-hours have been put into investigating bug reports involving encryption, and some fixes have been made. Unfortunately, something appears to be going wrong when non-raw sends of encrypted datasets are received by another system:

      https://github.com/openzfs/zfs/issues/12014

      I do not believe anyone has figured out what is going wrong there. It has not been for lack of trying. Raw sends from encrypted datasets appear to be fine.
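
      In other words, until that issue is resolved, prefer raw sends when replicating encrypted datasets; roughly (dataset and host names invented):

          zfs snapshot tank/secrets@backup
          zfs send -w tank/secrets@backup | ssh backuphost zfs recv pool/secrets   # -w/--raw ships the ciphertext as-is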

I just don't get how the Windows world - by far the largest PC platform by userbase - still doesn't have any answer to ZFS. Microsoft had WinFS and then ReFS, but the latter is on the back burner, and while there is active development (Win11 ships some bits from time to time), a release is nowhere in sight. There are some lone warriors attempting the giant task of creating a ZFS compatibility layer, but those projects are far from mature/usable.

How come Windows still uses a 32-year-old file system?

  • To be honest, the situation with Linux is barely better.

    ZFS has license issues with Linux, preventing full integration, and Btrfs is 15 years in the making and still doesn't match ZFS in features and stability.

    Most Linux distros still use ext4 by default, which is 19 years old, but ext4 is little more than a series of extensions on top of ext2, which is the same age as NTFS.

    In all fairness, there are few OS components that are as critical as the filesystem, and many wouldn't touch filesystems that have less than a decade of proven track record in production.

    • ZFS might be better than any other FS on Linux (I don't judge that).

      But you must admit that the situation on Linux is quite a bit better than on Windows. Linux has many filesystems in the mainline kernel, and there is a lot of development. Btrfs had a rocky start, but it has gotten better.

    • I’m interested to know what ‘full integration’ does look like, I use ZFS in Proxmox (Debian-based) and it’s really great and super solid, but I haven’t used ZFS in more vanilla Linux distros. Does Proxmox have things that regular Linux is missing out on, or are there shortcomings and things I just don’t realise about Proxmox?

      9 replies →

    • As far as stability goes, Btrfs is used by Meta, Synology, and many others, so I wouldn't say it's unstable, but some features are lacking.

      25 replies →

    • > Btrfs [...] still doesn't match ZFS in features [...]

      Isn't the feature in question (array expansion) precisely one which btrfs already had for a long time? Does ZFS have the opposite feature (shrinking the array), which AFAIK btrfs also already had for a long time?

      (And there's one feature which is important to many, "being in the upstream Linux kernel", that ZFS most likely will never have.)

      4 replies →

    • > Most Linux distros still use ext4 by default, which is 19 years old, but ext4 is little more than a series of extensions on top of ext2, which is the same age as NTFS.

      However, ext4 and XFS are much simpler and more performant than Btrfs and ZFS as root filesystems on personal systems and small servers.

      I personally won't use either of them as the root FS on a single-disk system, regardless of how fast my storage subsystem is.

      22 replies →

    • https://openzfs.github.io/openzfs-docs/Getting%20Started/index.html

      ZFS runs on all major Linux distros, the source is compiled locally and there is no meaningful license problem. In datacenter and "enterprise" environments we compile ZFS "statically" with other kernel modules all the time.

      For over six years now, there has been an "experimental" option in the graphical Ubuntu installer to install the root filesystem on ZFS. Almost everyone I personally know (just my anecdote) chooses this "experimental" option. There has been an occasion here and there of ZFS snapshots taking up too much space, but other than that there have not been any problems.

      I statically compile ZFS into a kernel that intentionally does not support loading modules on some of my personal laptops. My experience has been great, others' mileage may (certainly will) vary.

    • You've been able to add and remove devices at will for a long time with Btrfs (this is only recently supported in ZFS, with lots of caveats).

      Btrfs also supports async/offline dedupe.

      You can also layer it on top of mdadm. IIRC, ZFS strongly discourages using anything but directly attached physical disks.

    • >ZFS has license issues with Linux, preventing full integration

      No one wants that; OpenZFS is much healthier without Linux and its "Foundation/Politics".

      7 replies →

  • > How come that Windows still uses a 32 year old file system?

    Simple. Because most of the burden is taken by the (enterprise) storage hardware hosting the FS. Snapshots, block level deduplication, object storage technologies, RAID/Resiliency, size changes, you name it.

    Modern storage appliances are black magic, and you don't need many more features from NTFS. You either access storage transparently via NAS/SAN or store your NTFS volumes on capable disk boxes.

    In the Linux world, at the higher end, there's Lustre and GPFS. ZFS is mostly for resilient, but not performance-critical, needs.

  • > I just don't get it how the Windows world - by far the largest PC platform per userbase - still doesn't have any answer to ZFS.

    The mainline Linux kernel doesn't either, and I think the answer is because it's hard and high risk with a return mostly measured in technical respect?

    • Technically speaking, bcachefs has been merged into the Linux kernel - that makes your initial assertion wrong.

      But considering it's had two drama events within a year of getting merged... I think we can safely confirm your conclusion that it's really hard.

      14 replies →

  • Honest question: as an end user who uses Windows and Linux and does not use ZFS, what am I missing?

    • Way better data security and resilience against bit rot. This goes for both HDDs and SSDs. Copy-on-write, snapshots, end-to-end integrity. Pools also make it easier to extend storage and to handle drive failure (and SSDs corrupt in sneakier ways).

      19 replies →

    • For a while I ran Open Solaris with ZFS as root filesystem.

      The key feature for me, which I miss, is the snapshotting integrated into the package manager.

      ZFS allows snapshots more or less for free (due to copy-on-write), including cron-based snapshotting every 15 minutes. So if I made a mistake anywhere, there was a way to recover.

      And integrating that with the update manager and boot manager means a snapshot is created on every update, and during boot one can switch between states. I never had a broken update, but it gave a good feeling.

      On my home server I like the RAID features, and on Solaris it was nicely integrated with NFS etc., so one could easily create volumes, export them, and set restrictions (max size etc.) on them.
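
      On Linux you can approximate the 15-minute snapshot part with a trivial cron entry; a sketch with made-up pool/dataset names (the %-escaping is a cron quirk):

          # /etc/cron.d/zfs-snap: snapshot the home dataset every 15 minutes
          */15 * * * * root /usr/sbin/zfs snapshot rpool/home@auto-$(date +\%Y\%m\%d-\%H\%M)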

      1 reply →

    • Much faster launching of applications/files you use regularly. The ability to roll back updates in seconds if they cause issues, thanks to snapshots. Fast backups with snapshots + zfs send/receive to a remote machine. Compressed disks, which both let you store more on a drive and make accessing files faster. Easy encryption. The ability to mirror 2 large USB disks so you never have your data corrupted or lose it to drive failure. And you can move your data or entire OS install to a new computer easily by booting a live disk and just doing a send/receive to the new PC.

      (I have never used dedup, but it's there if you want I guess)

    • Online filesystem checking and repair.

      Reading any file will tell you with 100% guarantee if it is corrupt or not.

      Snapshots that you can `cd` into, so you can compare any prior version of your FS with the live version (see the sketch below).

      Block level compression.
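
      The snapshot browsing mentioned above looks roughly like this (dataset and snapshot names are examples; the .zfs directory is hidden unless you set snapdir=visible):

          cd /tank/projects/.zfs/snapshot/2025-01-10/   # read-only view of the dataset at snapshot time
          diff -r . /tank/projects                      # compare against the live filesystem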

      1 reply →

    • Snapshots (note: NTFS does have these via Volume Shadow Copy, but they're not as easily accessible to the end user as they are in ZFS). Copy-on-write for reliability under crashes. Block checksumming for data protection (bit rot).

  • NTFS has been extended in various ways over the years, to the point that what you could do with an NTFS drive 32 years ago feels like a completely different filesystem compared with what you can do with it on current Windows.

    Honestly I really like ReFS, particularly in the context of Storage Spaces, but I don't think it's relevant to Microsoft's consumer desktop OS, where users don't have 6 drives they need to pool together. Don't get me wrong, I use ZFS because that's what I can get running on a Linux server and I'm not going to run Windows Server just for the storage pooling... but ReFS + Storage Spaces wins my heart with its 256 MB slab approach. That means you can add and remove mixed-size drives and still get maximum space utilization for the pool's parity settings. ZFS, by contrast, is only now getting online adds of same-or-larger drives, ten years later.

  • OS development pretty much stopped around 2000. ZFS is from 2001. I don't count a new way to organise my photos or integrate with a search engine as "OS" though.

  • The same reason file deduplication is not enabled for client Windows: greed.

    For example, there are numerous new file systems people use: OneDrive, Google Drive, iCloud Storage. Do you get it?

  • NTFS is good enough for most people, who have a laptop with one SSD in it.

    • The benefits of ZFS don't need multiple drives to be useful. I've been running ZFS on root for years now, and snapshots have saved my bacon several times. Also, with block checksums you can at least detect bit rot. And COW is always useful.

      3 replies →

It's good to see that they were pretty conservative about the expansion.

Not only is expansion completely transparent and resumable, it also maintains redundancy throughout the process.

That said, there is one tiny caveat people should be aware of:

> After the expansion completes, old blocks remain with their old data-to-parity ratio (e.g. 5-wide RAIDZ2, has 3 data to 2 parity), but distributed among the larger set of disks. New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been expanded once to 6-wide, has 4 data to 2 parity).

  • I'm not sure that's really a caveat; it just means old data might be in a suboptimal layout. Even with that, you still get the full benefits of raidzN, where up to N disks can completely fail and the pool will remain functional.

    • I think it's a huge caveat, because it makes upgrades a lot less efficient than you'd expect.

      For example, home users generally don't want to buy all of their storage up front. They want to add disks as the array fills up. Being able to start with a 2-disk raidz1 and later upgrade it to a 3-disk and eventually 4-disk array is amazing. It's a lot less amazing if you end up with 55% storage efficiency rather than the 66% you'd ideally get from a 2-disk to 3-disk upgrade. That's 11% of your total disk capacity wasted, without any benefit whatsoever.
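
      To make the arithmetic concrete, assuming 1 TB disks and a pool that was full at expansion time:

          2-wide raidz1, full:        2 TB raw -> 1 TB usable (50%)
          add a third 1 TB disk:      3 TB raw total
          old blocks stay 1d:1p:      the original 2 TB raw still holds only 1 TB
          new 1 TB raw written 2d:1p: ~0.67 TB usable
          total:                      ~1.67 TB of 3 TB ~ 55%, vs 2 TB (66%) if everything were rewritten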

      4 replies →

  • The caveat is very much expected; you should expect ZFS features not to rewrite existing blocks. Changes to settings only apply to new data, for example.

  • Yeah, it's a pretty huge caveat to be honest.

        Da1 Db1 Dc1 Pa1 Pb1
        Da2 Db2 Dc2 Pa2 Pb2
        Da3 Db3 Dc3 Pa3 Pb3
        ___ ___ ___ Pa4 Pb4
    

    ___ represents free space. After expansion by one disk you would logically expect something like:

        Da1 Db1 Dc1 Da2 Pa1 Pb1
        Db2 Dc2 Da3 Db3 Pa2 Pb2
        Dc3 ___ ___ ___ Pa3 Pb3
        ___ ___ ___ ___ Pa4 Pb4
    

    But as I understand it, it would actually expand to:

        Da1 Db1 Dc1 Dd1 Pa1 Pb1
        Da2 Db2 Dc2 Dd2 Pa2 Pb2
        Da3 Db3 Dc3 Dd3 Pa3 Pb3
        ___ ___ ___ ___ Pa4 Pb4
    

    Where the Dd1-3 blocks are just wasted. Meaning by adding a new disk to the array you're only expanding free storage by 25%... So say you have 8TB disks for a total of 24TB of storage free originally, and you have 4TB free before expansion, you would have 5TB free after expansion.

    Please tell me I've misunderstood this, because to me it is a pretty useless implementation if I haven't.

    • ZFS RAID-Z does not have parity disks. The parity and data are interleaved to allow data reads to be done from all disks rather than just the data disks.

      The slides here explain how it works:

      https://openzfs.org/w/images/5/5e/RAIDZ_Expansion_2023.pdf

      Anyway, you are not entirely wrong. The old data will have the old parity:data ratio while new data will have the new parity:data ratio. As old data is freed from the vdev, new writes will use the new parity:data ratio. You can speed this up by doing send/receive, or by deleting all snapshots and then rewriting the files in place. This has the caveat that reflinks will not survive the operation, such that if you used reflinks to deduplicate storage, you will find the deduplication effect is gone afterward.

      1 reply →

    • Unless I misunderstood you, you're describing more how classical RAID would work. The RAID-Z expansion works like you note you would logically expect. You added a drive with four blocks of free space, and you end up with four blocks more of free space afterwards.

      You can see this in the presentation[1] slides[2].

      The reason this is sub-optimal post-expansion is that, in your example, the old maximal stripe width is lower than the post-expansion maximal stripe width.

      Your example is a bit unfortunate in terms of allocated blocks vs layout, but if we tweak it slightly, then

          Da1 Db1 Dc1 Pa1 Pb1
          Da2 Db2 Dc2 Pa2 Pb2
          Da3 Db3 Pa3 Pb3 ___
      

      would, after RAID-Z expansion, become

          Da1 Db1 Dc1 Pa1 Pb1 Da2
          Db2 Dc2 Pa2 Pb2 Da3 Db3 
          Pa3 Pb3 ___ ___ ___ ___
      

      Ie you added a disk with 3 new blocks, and so total free space after is 1+3 = 4 blocks.

      However if the same data was written in the post-expanded vdev configuration, it would have become

          Da1 Db1 Dc1 Dd1 Pa1 Pb1
          Da2 Db2 Dc2 Dd2 Pa2 Pb2
          ___ ___ ___ ___ ___ ___
      

      Ie, you'd have 6 free blocks not just 4 blocks.

      Of course this doesn't apply to writes that end up using less than the maximal stripe width.

      [1]: https://www.youtube.com/watch?v=tqyNHyq0LYM

      [2]: https://openzfs.org/w/images/5/5e/RAIDZ_Expansion_2023.pdf

      3 replies →

This is huge news for ZFS users (probably mostly those in the hobbyist/home use space, but still). raidz expansion has been one of the most requested features for years.

  • I'm not yet familiar with ZFS and couldn't find it in the release notes: does expansion only work with disks of the same size? Is adding bigger/smaller disks possible, or do all disks need to be the same size?

    • You can use different sized disks, but RAID-Z will truncate the space it uses to the lowest common denominator. If you increase the lowest common denominator, RAID-Z should auto-expand to use the additional space. All parity RAID technologies truncate members to the lowest common denominator, not just ZFS.
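
      Concretely, "increasing the lowest common denominator" is the usual replace-and-autoexpand dance; a sketch with invented pool/device names:

          zpool set autoexpand=on tank                       # let the vdev grow once every member has room
          zpool replace tank small-disk-1 /dev/bigger-disk   # repeat per small disk, letting each resilver finish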

      3 replies →

    • As far as I understand, ZFS doesn't work at all with disks of differing sizes (in the same array). So if you try it, it just finds the size of the smallest disk, and uses that for all disks. So if you put an 8TB drive in an array with a bunch of 10TB drives, they'll all be treated as 8TB drives, and the extra 2TB will be ignored on those disks.

      However, if you replace the smallest disk with a new, larger drive, and resilver, then it'll now use the new smallest disk as the baseline, and use that extra space on the other drives.

      (Someone please correct me if I'm wrong.)

      2 replies →

    • IIRC, you could always replace drives in a raidset with larger devices. When the last drive is replaced, then the new space is recognized.

      This new operation seems somewhat more sophisticated.

    • You need to buy the exact same drive with the same capacity and speed. Your raidz vdev will be as small and as slow as your smallest and slowest drive.

      btrfs and the new bcachefs can do RAID with mixed drives, but I can’t trust either of them with my data yet.

      9 replies →

Note: This is online expansion. Expansion was always possible but you did need to take the array down to do it. You could also move to bigger drives but you also had to do that one at a time (and only gain the new capacity once all drives were upgraded of course)

As far as I know shrinking a pool is still not possible though. So if you have a pool with 5 drives and add a 6th, you can't go back to 5 drives even if there is very little data in it.

FINALLY!

You can do borderline insane single-vdev setups like RAID-Z3 with 4 disks (3 disks' worth of redundancy) of the most expensive, highest-density hard drives money can buy right now, for an initial effective space usage of 25%, and then keep buying and expanding disk by disk as the space demand grows, up to something like 12-ish disks. Disk prices drop as time goes on, and you get a spread-out failure risk since the disks are added at different times.

  • Yes but see my sibling comment.

    When you expand your array, your existing data will not be stored any more efficiently.

    To get the new parity/data ratios, you would have to force copies of the data and delete the old, inefficient versions, e.g. with something like this [1]

    My personal take is that it's a much better idea to buy individual complete raid-z configurations and add new ones / replace old ones (disk by disk!) as you go.

    [1] https://github.com/markusressel/zfs-inplace-rebalancing
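
    The linked script [1] essentially just rewrites every file so its blocks get reallocated at the new data:parity ratio; the core idea (not the actual script, and as noted elsewhere in the thread this breaks reflinks/dedup) is roughly:

        # rewrite one file in place so ZFS allocates fresh blocks for it
        cp -a somefile somefile.rebalance && mv somefile.rebalance somefile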

How does ZFS compare to btrfs? I'm currently using btrfs for my home server, but I've had some strange troubles with it. I'm thinking about switching to ZFS, but I don't want to end up in the same situation.

  • I first tried btrfs 15 years ago with Linux 2.6.33-rc4 if I recall. It developed an unlinkable file within 3 days, so I stopped using it. Later, I found ZFS. It had a few less significant problems, but I was a CS student at the time and I thought I could fix them since they seemed minor in comparison to the issue I had with btrfs, so over the next 18 months, I solved all of the problems that it had that bothered me and sent the patches to be included in the then ZFSOnLinux repository. My effort helped make it production ready on Linux. I have used ZFS ever since and it has worked well for me.

    If btrfs had been in better shape, I would have been a btrfs contributor. Unfortunately for btrfs, it not only was in bad shape back then, but other btrfs issues continued to bite me every time I tried it over the years for anything serious (e.g. frequent ENOSPC errors when there is still space). ZFS on the other hand just works. Myself and many others did a great deal of work to ensure it works well.

    The main reason for the difference is that ZFS had a very solid foundation, which was achieved by having some fantastic regression testing facilities. It has a userland version that randomly exercises the code to find bugs before they occur in production and a test suite that is run on every proposed change to help shake out bugs.

    ZFS also has more people reviewing proposed changes than other filesystems. The Btrfs developers will often state that there is a significant manpower difference between the two file systems. I vaguely recall them claiming the difference was a factor of 6.

    Anyway, few people who use ZFS regret it, so I think you will find you like it too.

  • Btrfs has similar aims to ZFS, but is far less mature. I used it for my root partitions because it doesn't need DKMS, but had many troubles. I used it in a fairly simple way, just a mirror. One day, one of the drives in the array started to have issues, and Btrfs fell on its face. It remounted everything read-only, if I remember correctly, and would not run in degraded mode by default. Even mdraid would do better than this, without checksumming and so forth. ZFS likewise says the array is faulted, but of course allows it to be used. The fact that the default behavior was not RAID, because it was literally missing the R part of reading the data back, made me lose any faith in it. I moved to ZFS and haven't had issues since. There is much more of a community and lots of good tooling around it.

  • I used Btrfs for a few years but switched away a couple years ago. I also had one or two incidents with Btrfs where some weirdness happened, but I was able to recover everything in the end. Overall I liked the flexibility of Btrfs, but mostly I found it too slow.

    I use ZFS on Arch Linux and overall have had no problems with it so far. There's more customization and methods to optimize performance. My one suggestion is to do a lot of research and testing with ZFS. There is a bit of a learning curve, but it's been worth the switch for me.

Happy to see the ARC bypass for NVMe performance; ZFS really fails to exploit NVMe's potential. Online expansion might be interesting too. I tried to use ZFS for some very busy databases and ended up getting bitten badly by the fragmentation bug. The only way to restore performance appears to be copying the data off the volume, nuking it, and then copying it back. Now, perhaps, if I expand the zpool I might be able to reduce fragmentation by copying the tablespace within the same volume.

Can someone describe why they would use ZFS (or similar) for home usage?

  • Good reasons for me:

    Checksums: this is even more important in home usage, as the hardware is usually of lower quality. Faulty controllers, crappy cables, hard disks stored at higher-than-advised temperatures... there are many reasons for bogus data to get written, and ZFS handles that well and automatically (if you have redundancy).

    Snapshots: very useful to make backups and quickly go back to an older version of a file when mistakes are made

    Ease of mind: compared to the alternatives, I find that ZFS is easier to use and makes it harder to make a mistake that could cause data loss (e.g. remove the wrong drive by mistake when replacing a faulty one, the pool becomes unusable, "oops!", put the disk back, the pool goes back to work as if nothing happened). Maybe it is different now with mdadm, but when I used it years ago I was always worried about making a destructive mistake.

    • > Snapshots: very useful to make backups and quickly go back to an older version of a file when mistakes are made

      Piling on here: Sending snapshots to remote machines (or removable drives) is very easy. That makes snapshots viable as a backup mechanism (because they can exist off-site and offline).

  • To give an answer that nobody else has given, ZFS is great for storing Steam games. Set recordsize=1M and compression=zstd and you can often store about 33% more games in the same space.

    A friend uses ZFS to store his Steam games on a couple of hard drives. He gave ZFS a SSD to use as L2ARC. ZFS automatically caches the games he likes to run on the SSD so that they load quickly. If he changes which games he likes to run, ZFS will automatically adapt to cache those on the SSD instead.
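
    For reference, that setup is just a couple of dataset properties plus a cache vdev; something like this (pool, dataset and device names are examples):

        zfs create -o recordsize=1M -o compression=zstd tank/steam
        zpool add tank cache /dev/nvme0n1   # the SSD becomes L2ARC for the pool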

    • The compression and ARC will make games load much faster than they would on NTFS, even without a separate cache drive (L2ARC).

    • As I understand it, L2ARC doesn't work across reboots, which unfortunately makes it almost useless for systems that get rebooted regularly, like desktops.

      2 replies →

  • I replicate my entire filesystem to a local NAS every 10 minutes using zrepl. This has already saved my bacon once when a WD_BLACK SN850 suddenly died on me [1]. It's also recovered code from some classic git blunders. It shouldn't be possible any more to lose data to user error or single device failure. We have the technology.

    [1]: https://chromakode.com/post/zfs-recovery-with-zrepl/

  • Several reasons, but major ones (for me) are reliability (checksums and self-healing) and portability (no other modern filesystem can be read and written on Linux, FreeBSD, Windows, and macOS).

    Snapshots ("boot environments") are also supported by Btrfs (my Linux installations use that so I don't have to worry about having the 3rd party kernel module to read my rootfs). Performance isn't that great either and, assuming Linux, XFS is a better choice if that is your main concern.

  • It's relatively easy, and yet powerful. Before that I had mdadm + LVM + dm-crypt + ext4, which also worked, but all the layers gave me a headache.

    Automated snapshots are super easy and fast. They're also easy to access: if you deleted a file, you don't have to restore the whole snapshot, you can just cp it from the hidden .zfs/ folder.

    I've been running it on 6x 8TB disks for a couple of years now, in a raidz2, which means up to 2 disks can die. Would I use it on a single disk on a Desktop? Probably not.

    • > Would I use it on a single disk on a Desktop? Probably not.

      I do. Snapshots and replication and checksumming are awesome.

  • I have a home-built NAS that uses ZFS for the storage array, and the checksumming has been really quite useful in detecting and correcting bit rot. In the past I used mdadm with ext4 on top, and that worked, but it didn't defend against bit rot. I have considered Btrfs, since it would get me the same checksumming without the rest of ZFS, but it's not considered reliable for setups with parity yet (although I think it is likely more than reliable enough by now).

    I do occasionally use snapshots and the compression feature is handy on quite a lot of my data set but I don't use the user and group limitations or remote send and receive etc. ZFS does a lot more than I need but it also works really well and I wouldn't move away from a checksumming filesystem now.

  • Apart from just peace of mind about bit rot, I use it for the snapshotting capability, which makes it super easy to do backups. You can snapshot and send the snapshots to other storage with e.g. zfs-autobackup; it's trivial and you can't screw it up. If the snapshots exist on the other drive, you know you have a backup.

  • I use it on a NAS for:

    - Confidence in my long-term storage of some data I care about, as zpool scrub protects against bit rot

    - Cheap snapshots that provide both easy checkpoints for work saved to my network share, and resilience against ransomware attacks against my other computers' backups to my NAS

    - Easy and efficient (zfs send) replication to external hard drives for storage pool backup (a minimal sketch follows after this list)

    - Built-in and ergonomic encryption

    And it's really pretty easy to use. I started with FreeNAS (now TrueNAS), but eventually switched to just running FreeBSD + ZFS + Samba on my file server because it's not that complicated.
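
    In case it helps anyone, the external-drive replication in the list above boils down to something like this (pool and snapshot names invented, backuppool being a pool on the external drive):

        zfs snapshot -r tank@offsite-2025-01
        zfs send -R tank@offsite-2025-01 | zfs recv -F backuppool/tank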

  • I use it on my work laptop. Reasons:

    - a single solution that covers the entire storage domain (I don't have to learn multiple layers, like logical volume manager vs. ext4 vs. physical partitions)

    - cheap/free snapshots. I have been glad to have been able to revert individual files or entire file systems to an earlier state. E.g., create a snapshot before doing a major distro update.

    - easy to configure/well documented

    Like others have said, at this point I would need a good reason NOT to use ZFS on a system.

  • I used it on my home NAS (4x3TB drives, holding all of my family's backups, etc.) for the data security / checksumming features. IMO it's performant, robust and well-designed in ways that give me reassurance regarding data integrity and help prevent me shooting myself in the foot.

  • > describe why they would use ZFS (or similar) for home usage

    Mostly because it's there, but also the snapshots have a `diff` feature that's occasionally useful.

  • I'm trying to find a reason not to use ZFS at home.

    • The supposed requirements: enterprise-quality disks, huge RAM (1 GB per TB), ECC, at least 5 disks of redundancy. None of these are actual requirements, but people will try to educate you anyway. So use it, but keep it to yourself. :)

      8 replies →

  • I use ZFS for boot and storage volumes on my main workstation, which is primarily that--a workstation, not a server or NAS. Some benefits:

    - Excellent filesystem level backup facility. I can transfer snapshots to a spare drive, or send/receive to a remote (at present a spare computer, but rsync.net looks better every year I have to fix up the spare).

    - Unlike other fs-level backup solutions, the flexibility of zvols means I can easily expand or shrink the scope of what's backed up.

    - It's incredibly easy to test (and restore) backups. Pointing my to-be-backed-up volume, or my backup volume, to a previous backup snapshot is instant, and provides a complete view of the filesystem at that point in time. No "which files do you want to restore" hassles or any of that, and then I can re-point back to latest and keep stacking backups. Only Time Machine has even approached that level of simplicity in my experience, and I have tried a lot of backup tools. In general, backup tools/workflows that uphold "the test process is the restoration process, so we made the restoration process as easy and reversible as possible" are the best ones.

    - Dedup occasionally comes in useful (if e.g. I'm messing around with copies of really large AI training datasets or many terabytes of media file organization work). It's RAM-expensive, yes, but what's often not mentioned is that you can turn it on and off for a volume--if you rewrite data. So if I'm looking ahead to a week of high-volume file wrangling, I can turn dedup on where I need it, start a snapshot-and-immediately-restore of my data (or if it's not that many files, just cp them back and forth), and by the next day or so it'll be ready. Turning it off when I'm done is even simpler. I imagine that the copy cost and unpredictable memory usage mean that this kind of "toggled" approach to dedup isn't that useful for folks driving servers with ZFS, but it's outstanding on a workstation.

    - Using ZFSBootMenu outside of my OS means I can be extremely cavalier with my boot volume. Not sure if an experimental kernel upgrade is going to wreck my graphics driver? Take a snapshot and try it! Not sure if a curl | bash invocation from the internet is going to rm -rf /? Take a snapshot and try it! If my boot volume gets ruined, I can roll it back to a snapshot in the bootloader from outside of the OS. For extra paranoia I have a ZFSBootMenu EFI partition on a USB drive if I ever wreck the bootloader as well, but the odds are that if I ever break the system that bad the boot volume is damaged at the block level and can't restore local snapshots. In that case, I'd plug in the USB drive and restore a snapshot from the adjacent data volume, or my backup volume ... all without installing an OS or leaving the bootloader. The benefits of this to mental health are huge; I can tend towards a more "college me" approach to trying random shit from StackOverflow for tweaking my system without having to worry about "adult professional me" being concerned that I don't know what running some random garbage will do to my system. Being able to experiment first, and then learn what's really going on once I find what works, is very relieving and makes tinkering a much less fraught endeavor.

    - Being able to per-dataset enable/disable ARC and ZIL means that I can selectively make some actions really fast. My Steam games, for example, are in a high-ARC-bias dataset that starts prewarming (with throttled IO) in the background on boot. Game load times are extremely fast--sometimes at better than single-ext4-SSD levels--and I'm storing all my game installs on spinning rust for $35 (4x 500GB + 2x 32GB cheap SSD for cache)!
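
    For what it's worth, the per-dataset cache/ZIL bias and the dedup toggling described above are all ordinary dataset properties; a rough sketch with my own made-up dataset names:

        zfs set primarycache=all tank/steam        # cache data and metadata in ARC for this dataset
        zfs set primarycache=metadata tank/scratch # keep only metadata in ARC here
        zfs set logbias=throughput tank/scratch    # de-emphasise the ZIL for bulk writes
        zfs set dedup=on tank/media                # only affects blocks written from now on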

    • It's great to hear that you're using ZFSBootMenu the way I envisioned it! There's such a sense of relief and freedom having snapshots of your whole OS taken every 15 minutes.

      One thing that you might not be aware of is that you can create a zpool checkpoint before doing something 'dangerous' (disk swap, pool version upgrade, etc) and if it goes badly, roll back to that checkpoint in ZFSBootMenu on the Pool tab. Keep in mind though that you can only have one checkpoint at a time, they keep growing and growing, and a rollback is for EVERYTHING on the pool.
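
      For anyone following along, the checkpoint workflow is just (pool name is an example):

          zpool checkpoint tank      # take the single pool-wide checkpoint
          zpool checkpoint -d tank   # discard it once you're happy
          # to roll back from outside the OS: zpool import --rewind-to-checkpoint tank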

      1 reply →

Would love to use ZFS, but unfortunately Fedora just can't keep up with it...

  • I've been running Fedora on top of the excellent ZFSBootMenu[1] for about a year. You need to pay attention to the kernel versions supported by OpenZFS and might have to wait for support for a couple of weeks. The setup works fine otherwise.

    [1] https://docs.zfsbootmenu.org

Been running it since rc2. It’s insane how long this took to finally ship.

Can someone provide details on this bit please? "Direct IO: Allows bypassing the ARC for reads/writes, improving performance in scenarios like NVMe devices where caching may hinder efficiency".

ARC is based in RAM, so how could it reduce performance when used with NVMe devices? They are fast, but they aren't RAM-fast ...

  • Because with a cache (the ARC) you have to copy from the app to the cache and then DMA to disk. With Direct IO you can DMA directly from the app's RAM to the disk.

The annual reminder that if Oracle wanted to contribute positively to the Linux ecosystem, they would update the CDDL license ZFS uses to be GPL-compatible.

  • This is the annual reply that Oracle cannot change the OpenZFS license because OpenZFS contributors removed the “or any later version” part of the license from their contributions.

    By the way, comments such as yours seem to assume that Oracle is somehow involved with OpenZFS. Oracle has no connection with OpenZFS outside of owning copyright on the original OpenSolaris sources and a few tiny commits their employees contributed before Oracle purchased Sun. Oracle has its own internal ZFS fork and they have zero interest in bringing it to Linux. They want people to either go on their cloud or buy this:

    https://www.oracle.com/storage/nas/

    • Is there a reason the OpenZFS contributors don't want to dual-license their code? I'm not too familiar with the CDDL but I'm not sure what advantage it brings to an open source project compared to something like GPL? Having to deal with DKMS is one of the reasons why I'm sticking with BTRFS for doing ZFS-like stuff.

      1 reply →

  • Oracle changing the license would not make a huge difference to OpenZFS.

    Oracle only owns the copyright to the original Sun Microsystems code. It doesn’t apply to all ZFS implementations (probably not OracleZFS, perhaps not IllumosZFS) but in the specific case of OpenZFS the majority of the code is no longer Sun code.

    Don’t forget that SunZFS was open sourced in 2005, before Oracle bought Sun Microsystems in 2009. Oracle has created its own closed-source version of ZFS, but outside some Oracle shops nobody uses it (some people say Oracle stopped working on OracleZFS altogether some time ago).

    Considering the forks (first from Sun to the various open source implementations and later the fork from open source into Oracle's closed source version) were such a long time ago, there is not that much original code left. A lot of storage tech, or even entire storage concepts, did not exist when Sun open sourced ZFS. Various ZFS implementations developed their own support for TRIM, or Sequential Resilvering, or Zstd compression, or Persistent L2ARC, or Native ZFS Encryption, or Fusion Pools, or Allocation Classes, or dRAID, or RAIDZ expansion long after 2005. That is why the majority of the code in OpenZFS 2 is from long after the fork from Sun code twenty years ago.

    Modern OpenZFS contains new code contributions from Nexenta Systems, Delphix, Intel, iXsystems, Datto, Klara Systems and a whole bunch of other companies that have voluntarily offered their code when most of the non-Oracle ZFS implementations merged to become OpenZFS 2.0.

    If you wanted to relicense OpenZFS, you could get Oracle to agree for the bits under Sun copyright, but for the majority of the code you'd have to get a dozen or so companies to agree to relicense their contributions (probably not that hard) and many hundreds of individual contributors over two decades (a big task and probably not worth it).

  • Honestly, the CDDL being incompatible with the GPL is one of the weirder statements to come out of the FSF. It comes up every time the CDDL is mentioned, but no one really knows why they are incompatible; it is basically "the FSF says they are incompatible," and when really pressed, they dithered until 2016 and then came up with some hand-waving that the incompatibility lies in minutiae about the scope each license applies to.

    The whole thing smells of some FSF agenda to me. If you ship a CDDL file in your GPL project, it is still a GPL-licensed project and the CDDL file is still a CDDL-licensed file.