Comment by eikenberry

Ext4 is still very popular as a solid, no-frills filesystem. Btrfs is the primary alternative and still suffers from a poor reputation after its years of filesystem corruption bugs and hard-to-diagnose errors. ZFS and XFS only make sense for beefier servers, and all other filesystems have niche use cases or are still under development.

I don't consider myself a "believer" in anything, but as a sysadmin, if I see a deploy with ext4, I classify it as a newbie's choice or someone stuck in the 80s. It's not a matter of conviction; it's simply about managing your data:

- Transferable snapshots (zfs send) mean very low-cost backups and restores (see the sketch after this list), and serious desktop users don't want to be down for half a day because a disk failed.

- A pool means effective low-cost RAID, and anyone in 2026 who isn't looking for at least a mirror for their desktop either doesn't care about their data or lacks the expertise to understand its purpose.
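
A minimal sketch of both points, assuming a pool called tank and a backup host called homeserver (every name here is illustrative, not a specific setup):

    # a mirror: effective low-cost RAID out of two disks
    zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1

    # a transferable snapshot: snapshot locally, stream it to another box
    zfs snapshot tank/home@2026-01-31
    zfs send tank/home@2026-01-31 | ssh homeserver zfs recv -u backup/desktop-home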

ZFS is the first real progress in storage since the 80s. It's the most natural choice for anyone who wants to manage their digital information. Unfortunately, many in the GNU/Linux world are stuck in another era and don't understand it. They are mostly developers whose data is on someone else's cloud, not on their own hardware. If they do personal backups, they do them halfway, without a proven restore strategy. They are average users, even if more skilled than average, who don't believe in disk failures or bit rot because they haven't experienced it personally, or if they have, they haven't stopped to think about the incident.

If you want to try out services and keep your desktop clean, you need a small, backup-able volume that can be sent to other machines, e.g. a home server, and discarded once testing is done. If you want to efficiently manage storage because when something breaks, you don't want to spend a day manually reinstalling the OS and copying files by hand, you'll want ZFS with appropriate snapshots; whether they're managed with ZnapZend or something else doesn't really matter.
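
Under the same assumptions (tank, homeserver), that throwaway test volume is only a few commands:

    # a dedicated dataset for experiments, separate from the rest of the tree
    zfs create tank/scratch
    zfs snapshot tank/scratch@before-testing
    zfs send tank/scratch@before-testing | ssh homeserver zfs recv -u backup/scratch

    # once testing is done, throw the whole thing away
    zfs destroy -r tank/scratch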

Unfortunately, those without operations experience don't care and don't understand. The possibility of their computer breaking isn't something they consider, because in their experience it hasn't happened yet, or it's an event so exceptional that it doesn't need automation. The idea of having an OS installed for 10 years, always clean, because every rebuild is a fresh install and storage is managed complementarily, is alien to them. But the reality is that it's possible, and those who still understand operations really value it.

Those who don't understand it will hardly choose Guix or NixOS; they are people who play with Docker, sticking to "mainstream" distros like Fedora, Ubuntu, Mint, Arch. Those who choose declarative distros truly want to configure their infrastructure in text, IaC built into the OS, and truly have resilience, so their infrastructure must be able to resurrect from its configuration plus backups quickly and with minimal effort, because when something goes wrong, I have other things to think about than playing with the FLOSS toy of the moment.

  • I'll bite. I use NixOS as a daily driver and IMO it makes the underlying FS type even less important. If my main drive goes I can bootstrap a new one by cloning my repo and running some commands (sketched at the end of this comment). For my data, I just have some rsync scripts that sling the bits to various locations.

    I suppose if I really wanted to I could put the data on different partitions and disks and use the native FS tools, but it's a level of detail that doesn't seem to matter that much relative to what I currently have. I could see thinking about FS details much more for a dedicated storage server.

    FS-level backups for an OS sound more relevant when the OS setup is not reproducible and would be a pain to recreate.
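
    What that bootstrap plus rsync flow might look like, with a placeholder repo URL, hostname, and paths (and the partitioning of the new drive left out):

        # from the installer, with the new drive mounted at /mnt
        git clone https://example.com/me/nixos-config.git
        sudo nixos-install --flake ./nixos-config#myhost

        # then sling the bits back from wherever they were parked
        rsync -aHAX --progress backupbox:/backups/home/ /mnt/home/me/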

    • Yes and no. ZFS is for managing your data with simplicity and efficiency that isn't possible with other "storage systems" on GNU/Linux. Setting up a desktop with mdraid+LUKS+LVM+the chosen filesystem is a way longer job than creating a pool with the configuration you want and the volumes you want. Managing backups without snapshots that can be sent over a LAN is a major hassle.
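
      For comparison, the ZFS side of that setup is a sketch like this, assuming two disks and native encryption (pool and dataset names are made up); the traditional stack needs mdadm, cryptsetup, pvcreate/vgcreate/lvcreate and an mkfs per volume to get the same result:

          # one command gives the mirror plus encryption; datasets replace LVM volumes
          zpool create -O compression=lz4 -O encryption=on -O keyformat=passphrase \
              tank mirror /dev/nvme0n1 /dev/nvme1n1
          zfs create tank/home
          zfs create tank/var
          zfs create -o mountpoint=/srv/containers tank/containers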

      Can it be done? Formally, yes. But it's unlikely that anyone does it at home, because between the long setup and maintaining it, there's simply too much work to do. Backing up the OS itself isn't very useful with declarative distros, but sometimes a rebuild fails because, for example, there's a broken package/derivation at that moment, so having a recent OS ready as a simple volume to send over the LAN or pull from USB storage is definitely convenient. It has already happened to me a few times that I had to give up on a rebuild for an update because something was broken upstream; a few days later it was fixed, but without an OS backup, if I'd had to do a restore at that moment, I would have been stuck.

  • I would certainly feel that way about an ext2 system. But ext4 was released in 2006.

    • There's actually no substantial difference: it's the very concept of a mere filesystem that's obsolete. What's needed to manage your data is:

      - Lightweight/instant/accessible and transmittable (at block-level) snapshots, not just logical access

      - Integrated management of the underlying hardware, meaning support for various RAID types and dynamic volumes

      - Simplicity of management

      ZFS offers this; btrfs doesn't (not even combined with LUKS + LVM), and neither does Stratis: its snapshots are cumbersome, not transmittable at the block level, and its RAID support is embryonic and definitely not simple or useful in practice. Ext? It doesn't even have snapshots, nor even embryonic RAID, nor dynamic volumes.
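
      Concretely, "transmittable at block level" means incremental streams, and the RAID types and dynamic volumes come out of the same tool (names below are made up):

          # only the blocks that changed between two snapshots cross the wire
          zfs send -i tank/home@monday tank/home@tuesday | ssh homeserver zfs recv backup/home

          # RAID and dynamic volumes are pool/dataset features, not a separate stack
          zpool create bigtank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
          zfs create -V 50G bigtank/vm-disk    # a block device ("zvol") for a VM or test install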

      Let me give a simple example: I have a home server and 2 desktops. The home server acts as a backup for the desktops (and more) and is itself backed up primarily locally on cold storage. A deployment like this with NixOS, via org-mode, on a ZFS root is something that can be done at the home level. ZnapZend sends daily snapshots to the home server, which is backed up manually every day simply by physically connecting the cold storage and disconnecting it when it's done (script). That's the foundation.
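
      The cold-storage part is essentially a short script along these lines (pool and snapshot names are placeholders, and it assumes an initial full send already created coldpool/backup):

          # plug the cold-storage disk in, then:
          zpool import coldpool
          zfs send -R -I backup@last-run backup@today | zfs recv -Fu coldpool/backup
          zpool export coldpool    # safe to physically disconnect once this returns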

      What happens if I accidentally delete a file I want back? Well, locally I recover it on the fly by going to $volRoot/.zfs/snapshot/... I can even diff the versions with Meld if needed.
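
      That on-the-fly recovery is just reading from the hidden snapshot directory (paths here are examples):

          # every dataset exposes its snapshots read-only under .zfs/snapshot
          ls /home/me/.zfs/snapshot/
          cp /home/me/.zfs/snapshot/2026-01-30/important.txt /home/me/
          diff -u /home/me/.zfs/snapshot/2026-01-30/notes.org /home/me/notes.org

      What happens if the single NVMe in the laptop dies?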

      - I physically change the NVMe at my desk, then connect and boot the laptop from a live system on a USB NVMe that comes up with sshd active and a known user with authorized keys saved (creating it with NixOS is one config and one command; I update it monthly on the home server, but everything needed is in an org-mode file anyway)

      - From there, via ssh from the desktop, with one command (script) I create an empty pool and have mbuffer+zfs recv listening; the server, via mbuffer+zfs send, sends the latest snapshots of everything (data and OS) (sketched after this list)

      - When it's done, chroot, rebuild the OS to update the bootloader, reboot by disconnecting the USB NVMe, and I'm operational as before

      - What if one of the two mirrored NVMes in my desktop dies? I swap the faulted one and simply wait for resilvering.
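
      The middle steps boil down to roughly this, with hostnames, pool layout, and the port as stand-ins:

          # on the laptop, over ssh from the desktop: new pool, then listen for the stream
          zpool create -f rpool /dev/nvme0n1
          mbuffer -s 128k -m 1G -I 9090 | zfs recv -Fu rpool

          # on the home server: push the latest snapshots of everything (data and OS)
          zfs send -R backup/laptop@latest | mbuffer -s 128k -m 1G -O laptop:9090

          # then mount, chroot (nixos-enter), rebuild to reinstall the bootloader, reboot

          # the mirrored-desktop case is simpler still: swap the dead disk and resilver
          zpool replace tank /dev/nvme1n1
          zpool status tank    # watch the resilver progress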

      Human restore time: ~5 minutes. Machine time: ~40 minutes. EVERYTHING is exactly as it was before the disk failed; I have nothing to do manually. Same for every other machine in my infra. Cost of all this? Maintaining some org-mode notes with the Nix code inside, plus machine time for automated ISO creation, incremental backup updates, etc.

      Doing this with mainstream distros or legacy filesystems? Unfeasible. A mere logical backup, without snapshots or via LVM snaps, takes a huge amount of time; backing up the OS becomes unthinkable, and so on. That's the point.

      Most people have never built an infra like this; they've spent HOURS working in the shell to build their fragile home infra, and when something breaks they spend hours manually fixing it. They think this is normal because they don't know anything else to compare it to. They think a setup like the one described is beyond home reach, but it's not. That's why classic filesystems, from ext to XFS (which does have snapshots), by way of reiserfs, btrfs, bcachefs and so on, make no sense in 2026, and didn't even in 2016.

      They are software written even in recent times, but born into and stuck in a past era.

  • I'm currently troubleshooting an issue on my Proxmox server with very slow read speeds from a ZFS volume on an NVMe disk. The disk shows ~7 GB/s reads outside of ZFS, but ~10 MB/s in a VM using the ZFS volume.

    I've read other reports of this issue. It might be due to fragmentation, or misconfiguration, or who knows, really... The general consensus seems to be that performance degrades after ~80% utilization, and there are no sane defragmentation tools(!).
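
    For what it's worth, the utilization and fragmentation figures that consensus points at are visible directly on the pool:

        # CAP is pool utilization, FRAG is free-space fragmentation
        zpool list -o name,size,alloc,free,cap,frag,health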

    On my NAS, I've been using ext4 with SnapRAID and mergerfs for years without issues. Being able to use disparate drives and easily expand the array is flexible and cost effective, whereas ZFS makes this very difficult and expensive.
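
    For anyone who hasn't seen that combination: mergerfs pools mismatched data disks into a single mount, and SnapRAID computes parity across them on a schedule (paths below are placeholders):

        # pool two data disks into one view
        mergerfs -o allow_other,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/storage

        # parity and content files are declared in /etc/snapraid.conf; then:
        snapraid sync     # update parity after data changes
        snapraid scrub    # periodically verify the array against parity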

    So, thanks, but no thanks. For personal use I'll keep using systems that are not black boxes, are reliable, and performant for anything I'd ever need. What ZFS offers is powerful, but it also has significant downsides that are not worth it to me.