Comment by somehnguy
1 day ago
That seems awfully high, no? I've been running a 5-disk raidz2 pool (3 TB disks) and haven't replaced a single drive in the last 6ish years. It's composed entirely of used/decommissioned drives from eBay; the manufacture date stamp on most of them says 2014.
I did have a period where I thought drives were failing, but further investigation revealed that ZFS just didn't like the drives spinning down to save power and would mark them as failed. I don't remember the parameter, but I essentially forced the drives to spin 24/7 instead of spinning down when idle, and it's been fine ever since. My health monitoring script scrubs the array weekly (a rough sketch of that kind of check is below).
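For anyone curious, a weekly scrub plus a health check doesn't need much more than a couple of subprocess calls. This is only a minimal sketch under assumptions, not my actual script: the pool name `tank` is a placeholder, and real alerting would go to email or a push service instead of stdout.

```python
#!/usr/bin/env python3
"""Minimal weekly ZFS health check (sketch). Assumes a pool named 'tank'
and that it runs from cron with permission to call zpool."""

import subprocess
import sys

POOL = "tank"  # assumption: replace with your pool name


def run(cmd):
    """Run a command and return its stdout, raising on a non-zero exit."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


def main():
    # Kick off a scrub. zpool exits non-zero if a scrub is already running,
    # so tolerate that case instead of aborting the health check.
    try:
        run(["zpool", "scrub", POOL])
    except subprocess.CalledProcessError as e:
        print(f"scrub not started (possibly already running): {e.stderr.strip()}")

    # 'zpool status -x' reports "is healthy" when nothing is wrong.
    status = run(["zpool", "status", "-x", POOL])
    if "is healthy" in status:
        print(f"{POOL}: healthy")
        return 0

    # Anything else (DEGRADED, checksum errors, etc.) gets surfaced;
    # in a real setup this would be sent as an alert rather than printed.
    print(status)
    return 1


if __name__ == "__main__":
    sys.exit(main())
```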
Drives I RMA have actual bad sectors; you have a good batch. These drives tend to either last 10+ years or fail in 1-3 years, with a clear bimodal distribution. I think about half the drives in my array are original too.