
Comment by kkfx

14 hours ago

My issue with Guix, coming from NixOS, is the missing first-class ZFS support for root (encryption included), RustDesk, and a few other common services that are hard to package.

Guix's potential target, IMVHO, should be desktop power users, not HPC. NixOS, while developed mostly for embedded systems (Anduril) or servers in general, still takes care of desktops; Guix apparently does not, and that's a big issue... Nowadays, outside academia, I doubt there are many GNU/Linux users who deploy on plain ext4...

For desktop usage, I would be absolutely shocked if ext4 isn't the most common filesystem by a pretty wide margin. It's the default on Ubuntu, Debian, and Mint. Those are the three leading desktop distros.

No one is going to write a blog post titled "Why I just used the default filesystem in the installer", but that is what most people do. Things like Btrfs and ZFS are useful, complicated technologies that are fun to write about, fun to read about, and fun to experiment with. I'd be careful about assuming that leads to more general use, though. It's a lot like Guix and NixOS, in fact: they get all the attention in a forum like this. Ubuntu is what gets all the people, though.

  • You have a statistical point of view that doesn't go into enough detail: yes, Debian, Ubuntu, and Mint are mainstream distros and use ext4 by default. The vast majority of their users are also mainstream users who would never approach declarative distros, which are alien to them.

    Those who choose to go declarative instead are people with operations knowledge, who understand the value of a system that can be built, modified, and rebuilt with minimal effort thanks to the IaC built into the OS, and who understand the value of their data and therefore babysit it properly. The average Debian, Ubuntu, or Mint user today doesn't even have a backup; they use someone else's cloud. If they run experiments, they waste storage with Docker or use manually managed VPSs; they don't own a complete infrastructure, let alone a modern one.

    So designing Guix around them means it will never take off, because those users will never be Guix users. ZFS is the opposite of complicated: it's what you need to live comfortably once you know how to use it, which unfortunately isn't mainstream knowledge, and declarative distros offer the same kind of comfort to the same kind of user.

    NixOS succeeds despite the indigestible Nix language because it offers what's needed to be comfortable to those who know. Guix remains niche not because of GNU philosophy but because it doesn't do the same: it doesn't offer what people coming from operations are looking for, and they are the most realistic potential target users.

Ext4 is still very popular as a solid, no-frills filesystem. Btrfs is the primary alternative and still suffers from a poor reputation after its years of filesystem-corruption bugs and hard-to-diagnose errors. ZFS and XFS only make sense for beefier servers, and all other filesystems have niche use cases or are still under development.

  • I don't consider myself a "believer" in anything, but as a sysadmin, if I see a deploy with ext4, I classify it as the choice of a newbie or of someone stuck in the '80s. It's not a matter of conviction; it's simply about managing your data:

    - Transferable snapshots (zfs send) mean very low-cost backups and restores, and serious desktop users don't want to be down for half a day because a disk failed (see the sketch after this list).

    - A pool means effective low-cost RAID, and anyone in 2026 who isn't looking for at least a mirror for their desktop either doesn't care about their data or lacks the expertise to understand its purpose.
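
    As a concrete illustration of both points, a minimal sketch using plain OpenZFS commands; the pool, dataset, and device names (tank, backuppool, the by-id paths) are hypothetical:

        # Create a mirrored pool from two whole disks (hypothetical device names).
        zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
        zfs create tank/home

        # Take a point-in-time snapshot of the home dataset.
        zfs snapshot tank/home@2026-02-01

        # Replicate it to a pool on an external backup disk (assumed to already exist).
        zfs send tank/home@2026-02-01 | zfs recv backuppool/home

        # Later, send only the delta between two snapshots (incremental backup).
        zfs send -i tank/home@2026-02-01 tank/home@2026-02-08 | zfs recv backuppool/home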

    ZFS is the first real progress in storage since the '80s. It's the most natural choice for anyone who wants to manage their digital information. Unfortunately, many in the GNU/Linux world are stuck in another era and don't understand it. They are mostly developers whose data lives on someone else's cloud, not on their own hardware. If they do personal backups, they do them halfway, without a proven restore strategy. They are average users, even if more skilled than average, who don't believe in disk failures or bit rot because they haven't experienced them personally, or if they have, they haven't stopped to think about the incident.

    If you want to try out services and keep your desktop clean, you need a small, backup-able volume that can be sent to other machines (e.g. a home server) and discarded once testing is done. If you want to manage storage efficiently, because when something breaks you don't want to spend a day manually reinstalling the OS and copying files by hand, you'll want ZFS with appropriate snapshots; whether they're managed with ZnapZend or something else doesn't really matter.
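
    A sketch of that throwaway-volume workflow, assuming a hypothetical tank/scratch dataset and a home server reachable over SSH as homeserver with a backup pool (and a user permitted to receive, e.g. via zfs allow); ZnapZend or any other tool can drive the same primitives on a schedule:

        # Dedicated dataset for experiments, kept out of the root filesystem.
        zfs create -o compression=lz4 tank/scratch

        # Snapshot the experiment and ship it to the home server.
        zfs snapshot tank/scratch@experiment-done
        zfs send tank/scratch@experiment-done | ssh homeserver zfs recv backup/scratch

        # Once testing is over, drop the dataset locally; the copy survives on the server.
        zfs destroy -r tank/scratch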

    Unfortunately, those without operations experience don't care and don't understand. The possibility of their computer breaking isn't something they consider, because in their experience it hasn't happened yet, or it's an event so exceptional that it doesn't need automation. The idea of having an OS installed for 10 years, always clean, because every rebuild is a fresh install and storage is managed complementarily, is alien to them. But the reality is that it's possible, and those who still understand operations really value it.

    Those who don't understand it will hardly choose Guix or NixOS; they are people who play with Docker and stick to "mainstream" distros like Fedora, Ubuntu, Mint, or Arch. Those who choose declarative distros truly want to configure their infrastructure in text, with IaC built into the OS, and truly want resilience: their infrastructure must be able to resurrect from its configuration plus backups, quickly and with minimal effort, because when something goes wrong, they have other things to think about than playing with the FLOSS toy of the moment.

    • I'll bite. I use NixOS as a daily driver, and IMO it makes the underlying FS type even less important. If my main drive goes, I can bootstrap a new one by cloning my repo and running some commands. For my data, I just have some rsync scripts that sling the bits to various locations (roughly the workflow sketched below).

      I suppose if I really wanted to, I could put the data on different partitions and disks and use the native FS tools, but it's a level of detail that doesn't seem to matter that much relative to what I currently have. I could see thinking about FS details much more for a dedicated storage server.

      FS-level backups for an OS sound more relevant when the OS setup is not reproducible and would be a pain to recreate.
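
      A hedged sketch of that recovery path, assuming a flake-based config repo and an rsync target named nas; the URL, host name, and paths are invented for illustration:

          # From the installer, after partitioning the new disk and mounting it at /mnt:
          git clone https://example.org/me/nixos-config.git
          cd nixos-config
          nixos-install --flake .#desktop    # "desktop" is a hypothetical host name

          # After first boot, pull user data back from wherever the rsync scripts pushed it.
          rsync -a --delete nas:/backups/desktop/home/me/ /home/me/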

      1 reply →

    • I'm currently troubleshooting an issue on my Proxmox server with very slow read speeds from a ZFS volume on an NVMe disk. The disk shows ~7 GB/s reads outside of ZFS, but ~10 MB/s in a VM using the ZFS volume.

      I've read other reports of this issue. It might be due to fragmentation, or misconfiguration, or who knows, really... The general consensus seems to be that performance degrades after ~80% utilization, and there are no sane defragmentation tools(!).
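
      For what it's worth, the usual first checks for that kind of slowdown, run on the Proxmox host; the pool and zvol names (tank, vm-100-disk-0) are placeholders:

          # Capacity and fragmentation per pool; performance often drops as capacity passes ~80%.
          zpool list -o name,size,capacity,fragmentation,health

          # Per-device throughput and average latency while the slow VM is reading.
          zpool iostat -v -l tank 5

          # Zvol properties that commonly matter for VM disk performance.
          zfs get volblocksize,compression,sync tank/vm-100-disk-0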

      On my NAS, I've been using ext4 with SnapRAID and mergerfs for years without issues. Being able to use disparate drives and easily expand the array is flexible and cost effective, whereas ZFS makes this very difficult and expensive.
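
      For reference, a rough sketch of that kind of SnapRAID + mergerfs layout; the disk paths, config lines, and mount options are illustrative, not a recommendation:

          # /etc/snapraid.conf (minimal example)
          #   parity  /mnt/parity1/snapraid.parity
          #   content /mnt/disk1/.snapraid.content
          #   data d1 /mnt/disk1/
          #   data d2 /mnt/disk2/

          # Pool the data disks into one mount with mergerfs (fstab-style line).
          #   /mnt/disk*  /mnt/storage  fuse.mergerfs  cache.files=off,category.create=mfs  0 0

          # Periodic parity update and scrub of a portion of the array.
          snapraid sync
          snapraid scrub -p 10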

      So, thanks, but no thanks. For personal use I'll keep using systems that are not black boxes, are reliable, and are performant for anything I'd ever need. What ZFS offers is powerful, but it also has significant downsides that are not worth it to me.

See this repository for an example Guix config with ZFS: https://codeberg.org/hako/Testament/