Comment by chias

9 hours ago

> I would certainly feel that way about an ext2 system. But ext4 was released in 2006.

There's actually no substantial difference: it's the very concept of a mere filesystem that's obsolete. What's needed to manage your data is:

- Lightweight, instant, accessible snapshots that are transmittable at block level, not just logically

- Integrated management of the underlying hardware, meaning support for various RAID types and dynamic volumes

- Simplicity of management

ZFS offers all of this; btrfs doesn't, even combined with LUKS + LVM, and neither does Stratis: its snapshots are cumbersome and not transmittable at the block level, and its RAID support is embryonic, definitely not simple or useful in practice. Ext? It doesn't even have snapshots, let alone RAID or dynamic volumes.
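
To make the block-level point concrete: in ZFS a snapshot is instant, and an incremental send ships only the changed blocks to another machine. A minimal sketch (pool, dataset, and host names are invented):

```
# Take an atomic, instant snapshot of a dataset.
zfs snapshot tank/home@monday

# Ship only the blocks changed since the previous snapshot,
# as a raw incremental stream, to another machine
# (assumes @sunday already exists on the receiving side).
zfs send -i tank/home@sunday tank/home@monday | \
  ssh backup@homeserver zfs recv tank/backup/home
```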

Let me give a simple example: I have a home server and 2 desktops. The home server acts as a backup for the desktops (and more) and is itself backed up, primarily locally, to cold storage. A deployment like this, NixOS on a ZFS root with the whole configuration kept in org-mode, is something that can be done at the home level. ZnapZend sends daily snapshots to the home server, which is itself backed up every day simply by physically connecting the cold storage and disconnecting it when the script is done. That's the foundation.
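
For a sense of what that automation looks like, a ZnapZend plan along these lines would do it (dataset names, host, and retention plans are all just illustrative):

```
# Keep hourly snapshots for a week and dailies for a month locally;
# replicate to the home server, which keeps dailies for a month
# and weeklies for a year.
znapzendzetup create \
  SRC '7d=>1h,30d=>1d' rpool/home \
  DST:srv '30d=>1d,1y=>1w' backup@homeserver:tank/backup/desktop/home
```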

What happens if I accidentally delete a file I want back? Well, locally I recover it on the fly by going to $volRoot/.zfs/snapshot/... I can even diff versions with Meld if needed.
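
For example, assuming a dataset mounted at /home (the snapshot name and paths are invented):

```
# Snapshots are browsable read-only under the hidden .zfs directory.
ls /home/.zfs/snapshot/
# Copy yesterday's version of the lost file back into place.
cp /home/.zfs/snapshot/daily-2026-01-14/alice/notes.org /home/alice/notes.org
```

What happens if the single NVMe in the laptop dies?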

- I physically swap the NVMe at my desk, then connect and boot the laptop from a live system on a USB NVMe that comes up with sshd active and a known user with authorized keys already saved (creating that image with NixOS is one config and one command; I update it monthly on the home server, but everything needed is in an org-mode file anyway)

- From there, via ssh from the desktop, one command (a script) creates an empty pool and leaves mbuffer + zfs recv listening; the server, via mbuffer + zfs send, streams the latest snapshots of everything, data and OS (see the sketch after this list)

- When it's done, I chroot in, rebuild the OS to update the bootloader, reboot with the USB NVMe disconnected, and I'm operational as before

- And what if one of the two mirrored NVMes in my desktop dies? I replace the faulted drive and simply wait for the resilver.
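
A rough sketch of that receive/send pair, assuming the live system sees the new disk as /dev/nvme0n1 and the server keeps the laptop's backups under tank/backup/laptop (names, port, and buffer sizes are all made up):

```
# On the laptop, booted from the live USB: create an empty pool
# and listen for the replication stream over TCP.
zpool create -f rpool /dev/nvme0n1
mbuffer -s 128k -m 1G -I 9090 | zfs recv -Fu rpool

# On the home server: push the latest recursive snapshot,
# OS datasets included, through the buffer.
zfs send -R tank/backup/laptop@latest | mbuffer -s 128k -m 1G -O laptop:9090
```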
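
And the mirror case really is one command plus patience (pool and device names invented):

```
# Tell ZFS about the replacement disk and let it resilver.
zpool replace tank /dev/nvme1n1
zpool status tank   # watch the resilver progress
```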

Human restore time: ~5 minutes. Machine time: ~40 minutes. EVERYTHING is exactly as it was before the disk failed; I have nothing to do manually. Same for every other machine in my infra. The cost of all this? Maintaining some org-mode notes with the Nix code inside, plus machine time for automated ISO creation, incremental backup updates, etc.

Doing this with mainstream distros or legacy filesystems? Unfeasible. A mere logical backup, without snapshots or via LVM snaps, already takes a huge amount of time; backing up the OS becomes unthinkable, and so on. That's the point.

Most people have never built an infra like this; they've spent HOURS working in the shell to build their fragile home setups, and when something breaks they spend hours manually fixing it. They think this is normal because they have nothing else to compare it to. They think a setup like the one described is beyond home reach, but it's not. That's why classic filesystems, from ext to xfs (which does at least have reflinks), passing through reiserfs, btrfs, bcachefs and so on, make no sense in 2026, and didn't even in 2016.

Some of them were even written recently, but they were born in a past era and remain stuck there.

  • Or you just fully embrace the thin-client life and offload everything to the server. PXE boot with remotely mounted filesystems. Local hard drives? Who needs those?

  • The scenarios you mentioned are indeed nice use cases of ZFS, but other tools can do this too.

    I can make snapshots and recover files with SnapRAID or Kopia. In the case of a laptop system-drive failure, I have scripts to quickly set up a new system and restore data from backups. Sure, the new system won't be a bit-for-bit replica of the old one, and I'll have to tinker manually to get everything back in order, but these scenarios are so uncommon that I'm fine with them taking a bit more time and effort. I'd rather have that than rely on a complex filesystem whose performance degrades over time and which is difficult to work with and understand.
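
    Recovering a file with Kopia, for instance, is quick once a repository exists (the snapshot ID and paths below are purely illustrative):

    ```
    # Snapshot a directory into the connected Kopia repository.
    kopia snapshot create /home/alice/documents

    # Find the snapshot to restore from.
    kopia snapshot list /home/alice/documents

    # Restore it (by ID) to a scratch directory and pick out the file.
    kopia restore k7eec249d38ab5b04d4d266423c3fa234 /tmp/restored
    ```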

    You speak about ZFS as if it were a silver bullet and everything else were inferior. The reality is that every technical decision has tradeoffs, and the right solution depends on which tradeoffs make the most sense for a given situation.