
Comment by meindnoch

1 day ago

>I just made a 4 x 24 TB ZFS pool

How much RAM did you install? Did you follow the 1GB per 1TB recommendation for ZFS? (i.e. 96GB of RAM)

That's only for ZFS deduplication, which you should never enable unless you have very, very specific use cases.

For normal use, 2GB of RAM for that setup would be fine. But more RAM is more readily available cache, so more is better. It is certainly not even close to a requirement.

There is a lot of old, often repeated ZFS lore which has a kernel of truth but misleads people into thinking it's a requirement.

ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.

  • There are a couple recent developments in ZFS dedup that help to partially mitigate the memory issue: fast dedup and the ability to use a special vdev to hold the dedup table if it spills out of RAM.

    But yes, there's almost no instance where home users should enable it. Even the traditional 5GB-per-1TB rule can fall over completely on systems with a lot of small files.

    • Nice. I was hoping a vdev for the dedup table would come along. I've wanted to use Optane for the dedup table and see how it performs.

    • I think the asterisk there is that the special vdev requires redundancy and becomes a mandatory part of your pool.

      Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?

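A back-of-the-envelope sketch of why the rule scales with block count rather than raw capacity. The ~320 bytes of RAM per dedup-table (DDT) entry is a commonly cited ballpark, not an exact figure, and the block sizes are illustrative assumptions:

```python
TIB = 2**40
DDT_ENTRY_BYTES = 320  # rough in-core size of one DDT entry (ballpark)

def ddt_ram_gib(data_tib, avg_block_bytes):
    """Estimated RAM (GiB) needed for the dedup table covering
    `data_tib` TiB of unique data at a given average block size."""
    blocks = data_tib * TIB / avg_block_bytes
    return blocks * DDT_ENTRY_BYTES / 2**30

# At 64 KiB average blocks, this roughly reproduces the old rule of thumb:
print(ddt_ram_gib(1, 64 * 1024))  # -> 5.0 GiB per TiB
# Lots of small files (say 8 KiB blocks) inflates it eightfold:
print(ddt_ram_gib(1, 8 * 1024))   # -> 40.0 GiB per TiB
```

Since DDT cost is per block, a pool full of tiny files can need far more RAM per terabyte than the rule suggests, which is why it "falls over" there.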

I think you should be fine with 64GB (4x16GB ECC); I have an 8x10TB RAID-Z2 pool and it uses around 34GB.

Some myths never die, I guess...

  • That was never a myth, was it? It was just sound advice that was repeated without the information about which specific use cases it applied to.