
Comment by adastra22

1 day ago

They’re idle most of the time. Powered on 24/7, though, with maybe a few hundred megabytes written every day, plus a few dozen gigabytes now and then. Mostly long-term storage. SMART has too much noise; I wait for ZFS to kick a drive out of the pool before replacing it. With triple redundancy, I've never come close to data loss.

To be clear, I should have said replacing 2-3 disks per year.

That seems awfully high, no? I've been running a 5-disk raidz2 pool (3TB disks) and haven't replaced a single drive in the last 6-ish years. It's composed entirely of used/decommissioned drives from eBay. The manufacture date stamp on most of them says 2014.

I did have a period where I thought drives were failing, but further investigation revealed that ZFS just didn't like the drives spinning down to save power and would mark them as failed. I don't remember the parameter, but I essentially forced the drives to spin 24/7 instead of spinning down when idle, and it's been fine ever since. My health monitoring script scrubs the array weekly.
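Not the commenter's actual script, but a minimal sketch of what a weekly scrub-and-check job could look like, assuming a pool named "tank" and a user allowed to run `zpool`:

```python
#!/usr/bin/env python3
"""Weekly ZFS health check: start a scrub and report pool status.

A sketch only -- the pool name "tank" and the overall structure are
assumptions, not the original poster's setup.
"""
import subprocess
import sys

POOL = "tank"  # assumption: substitute your pool name


def run(cmd):
    """Run a command and return (exit code, combined stdout/stderr)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr


def main():
    # Start a scrub; zpool exits non-zero if one is already in progress.
    code, out = run(["zpool", "scrub", POOL])
    if code != 0:
        print(f"scrub not started: {out.strip()}", file=sys.stderr)

    # `zpool status -x` reports only unhealthy pools; a healthy pool
    # prints a short "is healthy" message.
    code, out = run(["zpool", "status", "-x", POOL])
    print(out.strip())
    sys.exit(0 if "healthy" in out else 1)


if __name__ == "__main__":
    main()
```

Dropped into a weekly cron entry, something like this covers the "scrub weekly" part; emailing the output when it exits non-zero is the usual next step.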

  • Drives I RMA have actual bad sectors. You have a good batch. These drives tend to either last 10+ years or fail in 1-3 years; the distribution is clearly bimodal. I think about half the drives in my array are original too.