Comment by ryao

5 days ago

You can do this on ZFS today with `zpool create -o ashift=14 ...`.

Yeah I know, thanks. But ZFS still mostly requires drives of the same size. My main NAS is like that, and I'd like to expand it with the different-sized drives I have lying around, but I can't, and I'm not keen on spending for new HDDs right now. So I figured I'd build a secondary NAS with bcachefs out of all the spare drives I have.

As for ZFS, I'll be buying some extra drives later this year and will make use of direct_io so I can use a spare NVMe drive for faster access.

  • If you don’t care about redundancy, you could add all of them as top-level vdevs, and ZFS will happily use all of the space on them until one fails. Performance should be great until there is a failure. Just have good backups.

    • Yep, that's sadly my current setup. Most of my data are not super critical.

      When I can spend some $3000 or so I'll absolutely buy several 20 TB drives and just nail the whole thing -- and will use ZFS -- but for now the several spare HDDs that I want dedicated to my data are set up exactly as you mentioned: top-level vdevs with no redundancy. ZFS is mostly handling it fine even though the drives have vastly different speeds (and one of them is actually an SSD).

      So yep, ZFS can still do quite a lot; it's just not as flexible as, e.g., bcachefs. But the latter is still missing important features, so I'm sticking with ZFS for a while yet.
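
      For anyone wanting to try the no-redundancy layout discussed above, a rough sketch (pool name and device names are made up for illustration; this stripes data across every disk with no redundancy, so losing any single drive loses the whole pool):

      ```
      # Each whole disk becomes its own top-level vdev; the pool stripes
      # across all of them with NO redundancy. Device names are examples.
      zpool create tank sda sdb sdc nvme0n1

      # Another mismatched spare can be added later as a further top-level vdev:
      zpool add tank sdd

      # zpool status should list each disk directly under the pool,
      # confirming there is no mirror/raidz layer.
      zpool status tank
      ```

      ZFS accepts the differing sizes here because each disk is its own vdev; the same-size constraint only bites within a single mirror or raidz vdev.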