Comment by zelly
6 years ago
He's not wrong. ext4 is actually maintained. This matters. ZFS hasn't kept up with SSDs. ZFS partitions are also almost impossible to resize, which is a huge deal in today's world of virtualized hardware.
Honestly Linus's attitude is refreshing. It's a sign that Linux hasn't yet become some stiff design-by-committee thing. One guy ranting still calls the shots. I love it. Protect this man at all costs.
> ZFS hasn't kept up with SSDs.
Pretty sure this is false. ZFS does support TRIM (FreeBSD has had TRIM support for quite a while, and ZoL has it now as well), and it also supports putting the L2ARC and ZIL/SLOG on SSDs.
> ZFS partitions are also almost impossible to resize
You can grow zfs partitions just fine (and even online expand). You just can't shrink them.
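To make the grow path concrete, here's a sketch of the two usual ways to expand after the underlying device or partition has been enlarged. The pool name `tank` and device path are hypothetical; these need a live ZFS system and root:

```shell
# Let the pool grow automatically whenever a vdev's underlying device gets bigger
zpool set autoexpand=on tank

# Or trigger expansion of one specific device by hand -- this is the
# online expand mentioned above, no downtime required
zpool online -e tank /dev/sda2
```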
> You just can't shrink them.
That's not even entirely true, though it requires temporarily shuffling data across multiple vdevs and doesn't presently support raidz. Also, vdev removal is primarily meant to handle an accidental "oops, I added a disk I shouldn't have" rather than removing a long-lived device -- there's no technical restriction against the latter case, though the redirect references could hamper performance.
The official stance has always been to send/receive to significantly change a pool's geometry where it isn't possible online.
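The send/receive route mentioned above looks roughly like this. Pool and dataset names are made up, and `newpool` must already exist with the geometry you actually want:

```shell
# Snapshot the source dataset (recursively, to catch child datasets)
zfs snapshot -r oldpool/data@migrate

# Replicate the whole snapshot tree into the reshaped pool;
# -R preserves properties and descendants, -F lets recv roll back the target
zfs send -R oldpool/data@migrate | zfs recv -F newpool/data
```

For large pools this is where the "and time" part comes in; incremental sends (`zfs send -i`) can shorten the final cutover window.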
yup, true enough. You can accomplish great things with a combination of zfs send/receive and time. ;)
Just last week I used btrfs shrink to upgrade to a newer Fedora after making a minimal backup. Very useful for my purposes... I don't plan to look at ZFS until it's in the mainline kernel. Having any Linux install media usable as a rescue disk is very handy.
> ZFS partitions are also almost impossible to resize
I'm not sure you've actually used ZFS very much, because whichever way I read this claim, it is actually pretty straightforward and simple to resize partitions holding ZFS pools, and volumes within ZFS pools.
For example, if you mean that you have a root zpool on a device using only half the device, you just have to resize the partition and then turn on `autoexpand` for the pool.
We are talking about something resembling adding an extra disk to a RAID5 array. That can be easily done with mdadm RAID, and then you just need to resize the LVM volume, or whatever you run on top of it. It cannot be done in ZFS, at least not in raidz1/raidz2 mode.
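For comparison, the mdadm reshape being described goes something like this. Device names are hypothetical, the reshape runs online but can take many hours, and you'd grow the filesystem or PV on top afterwards:

```shell
# Add a new disk to the existing 3-disk RAID5 array as a spare
mdadm --add /dev/md0 /dev/sde1

# Grow the array from 3 to 4 active devices (kicks off an online reshape)
mdadm --grow /dev/md0 --raid-devices=4

# Once the reshape finishes, grow whatever sits on top, e.g. an LVM PV
pvresize /dev/md0
```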
You're confusing extending vdevs with extending pools and stripes.
It's kind of apples to oranges, really.
FreeNAS documentation[0] makes it pretty clear.
In ZFS, you cannot add devices to a vdev after it has been created -- however, you CAN add more vdevs to a pool.
So basically, your complaint is that ZFS wants stripes of vdevs: instead of adding one drive to a 3-drive RAID5 to make a 4-drive RAID5, you have to add three drives as a second RAIDZ1 vdev, giving a 6-drive RAIDZ+0 that is equivalent to a RAID50 on a hardware controller.
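Concretely, expanding a pool by adding a whole second raidz1 vdev is a single command. Pool and device names here are hypothetical:

```shell
# Existing pool 'tank' already has one 3-disk raidz1 vdev.
# Add a second 3-disk raidz1 vdev; ZFS then stripes writes across
# both vdevs, giving the RAID50-like layout described above.
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
```

Note this is `zpool add` (new vdev), not `zpool attach` (mirror an existing device) -- mixing those up is a classic way to end up with an unwanted pool layout.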
Yes, it's more enterprisey, but it's not especially more difficult and the result is different and perhaps better depending on your use case.
"ZFS hasn't kept up with SSDs."
What does that mean?
ZFS (or at least ZoL) doesn't scale well to NVMe drives:
https://github.com/zfsonlinux/zfs/issues/8381
Probably something about TRIM.
That they've never used SSDs for a ZIL or a zpool, I would wager.
He is wrong. He's focused on performance; people use ZFS for its features, not its performance.
At work we used ZFS with snapshots for a container build machine, for performance reasons. We had some edge cases that made Docker's copy-on-write filesystem unsuitable.
zfsonlinux added support for TRIM last year. Are you referring to something else?
"Last year" means it's in very few distributions at this time. Encryption is another feature that is technically supported but has only just been added. When I built my NAS last year, I had to use dm-crypt because ZFS didn't have it. Some features indeed lag pretty badly in ZFS.
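For the record, the native encryption that landed in ZoL 0.8 is enabled per dataset at creation time, which is what would have replaced the dm-crypt layer. Pool and dataset names are hypothetical:

```shell
# Create an encrypted dataset; ZFS prompts for the passphrase interactively
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

# After a reboot the key must be loaded before the dataset can be mounted
zfs load-key tank/secure
zfs mount tank/secure
```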
Is Oracle, as the first party, making any effort to develop ZFS further at this point? Is ZoL the primary development team at this time?