Comment by speedgoose

3 days ago

I admire the courage to store data on refurbished Seagate hard drives. I prefer SSD storage with backups in cloud cold storage, so that I'm not the one replacing the failing hard drives.

I would also prefer having a large number of high-capacity SSDs so I could replace my spinning hard drives.

But even the cheapest high-capacity SSD deals are still a lot more expensive than a hard drive array.

I'll continue replacing failing hard drives for a few more years. For me that has meant zero replacements over a decade, though I planned for a 5% annual failure rate and keep a spare drive in the case ready to go. I could replace a failed drive from the array in the time it takes to shut down, swap a cable to the spare drive, and boot up again.
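
(Rough back-of-the-envelope on what planning for a 5% annual failure rate means; the four-drive array below is just an illustrative assumption, not my actual setup.)

    # Chance of at least one drive failure, assuming independent failures
    # at a fixed annual failure rate (AFR). Illustrative numbers only.
    def p_any_failure(afr: float, drives: int, years: int = 1) -> float:
        return 1 - (1 - afr) ** (drives * years)

    print(f"{p_any_failure(0.05, 4):.1%}")      # ~18.5% chance of a failure in a year
    print(f"{p_any_failure(0.05, 4, 10):.1%}")  # ~87.1% chance over a decade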

SSDs also need to be examined for power-loss protection. The results with consumer drives are mixed, and it's hard to find good info about how common drives behave. Getting enterprise-grade drives with guaranteed PLP from large onboard capacitors is ideal, but those are expensive. Spinning hard drives have the benefit of using their rotational inertia to power the drive long enough to finish outstanding writes.

  • This is going to be a huge anecdote, but all the consumer SSDs I've had have been dramatically less reliable than HDDs. I've gone through dozens of little SATA and M.2 drives, and almost every single one of them has failed when put under any kind of server workload. However, most of the HDDs I have from the last 10 years are still going strong despite sitting in my NAS and spinning that entire time.

    After going deep on the spec sheets and realizing that all but the best consumer drives have miserably low DWPD numbers, I switched to enterprise (U.2-style) drives two years ago. I slam them with logs, metrics data, backups, frequent writes and data transfers, and have had 0 failures.
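
    (For anyone reading spec sheets, DWPD is easy to derive from the TBW rating; the two drives below use made-up but typical-looking numbers, not specific models.)

        # Drive writes per day (DWPD) implied by a spec sheet's TBW figure.
        def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: float) -> float:
            return tbw_tb / (capacity_tb * warranty_years * 365)

        print(round(dwpd(tbw_tb=600, capacity_tb=1.0, warranty_years=5), 2))    # ~0.33 (typical consumer drive)
        print(round(dwpd(tbw_tb=5250, capacity_tb=1.92, warranty_years=5), 2))  # ~1.5 (typical enterprise drive)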

    • What file system are you using? ZFS is written with rotating rust in mind and will presumably kill non-enterprise SSDs.

  • You can find cheap used enterprise SSDs on eBay. But the problem is that even the most power-efficient enterprise SSDs (SATA) idle at around 1 W. And given the smaller capacities, you need many more of them to match a hard drive. In the end, HDDs might actually consume less power than an all-flash array plus controllers if you need a large capacity.
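
    (Rough idle-power comparison; every number below is an assumption, since per-model idle draw and capacities vary a lot and controller/HBA power isn't included.)

        import math

        # Idle power needed to reach a target raw capacity with fixed per-drive specs.
        def idle_watts(target_tb: float, drive_tb: float, watts_per_drive: float) -> float:
            return math.ceil(target_tb / drive_tb) * watts_per_drive

        print(idle_watts(100, 1.92, 1.0))  # 53 SATA SSDs at ~1 W idle -> 53.0 W
        print(idle_watts(100, 18, 5.0))    # 6 HDDs at ~5 W idle -> 30.0 W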

    • Used SSDs, especially enterprise ones, are a really bad idea unless you get some really old SLC parts. Flash wears out in a very obvious way that HDDs don't, and keep in mind that enterprise-rated SSDs are deliberately rated to sacrifice retention for endurance.

  • Curious: what's the use case for wanting your data backed up without fail? Is it personal archives, or something business/archive related?

    Not to say you shouldn't back up your data, but personally I wouldn't be too affected if one of my personal drives errored out, especially if it contained unused personal files from 10+ years ago (legal/tax/financials are another matter).

    • Any data I created, paid to license, or put significant work into gathering has to be backed up following the 3-2-1 rule. Stuff I can download or otherwise obtain again gets best-effort, but not mandatory, backups.

      Mainly I don't want to lose anything that took work to make or get. Personal photos, videos, source code, documents, and correspondence are the highest priority.

RAID. Preferably RAID 6. Much, much better to build a system to survive failure than to prevent failure.

  • Don't RAID these days. Software won rather drastically, likely because CPUs are finally powerful enough to run all those calculations without much of a hassle.

    Software solutions like Windows Storage Spaces, ZFS, XFS, unRAID, etc. are "just better" than traditional RAID.

    Yes, focus on double-parity solutions, such as ZFS's "raidz2" or other "equivalent to RAID 6" setups. But stick with software solutions that let you move hard drives around more easily, without tying them to motherboard slots or other such hardware constraints.
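
    (For a rough sense of the double-parity trade-off; the 8x 18 TB layout is an arbitrary example, and real ZFS pools lose a bit more to padding, metadata, and reserved space.)

        # Approximate usable capacity of a double-parity (RAID 6 / raidz2-style) array.
        def usable_tb_double_parity(drives: int, drive_tb: float) -> float:
            assert drives >= 4, "double parity needs at least 4 drives"
            return (drives - 2) * drive_tb

        print(usable_tb_double_parity(8, 18))  # 108 TB usable; survives any 2 simultaneous drive failures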

    • > Don't RAID these days. Software won rather drastically

      RAID does not mean or imply hardware RAID controllers, which you seem to incorrectly assume.

      Software RAID is still 100% RAID.

    • FYI, XFS is not redundant; also, RAID usually refers to software RAID these days.

      I like btrfs for this purpose since it's extremely easy to set up from the CLI, but any of the other options mentioned will work.

    • Software or hardware, it's still the same basic concept.

      Redundancy rather than individual reliability.

I have a dozen refurbished Exos disks in my storage machine. They work great! SSDs for bigger storage are simply too expensive.

And I prefer to have a healthy bank account balance.

Storing 18 TB (let alone with RAID) on SSDs is something only those earning Silicon Valley tech wages can afford.

  • We bought a few Kioxia 30.72 TiB SSDs for a couple of thousand in a liquidation sale. Sadly, I don't work there anymore, or I could have looked it up. They were U.2 drives if I recall, so you do need either a PCIe card or the appropriate stuff on your motherboard, but they're pretty damn nice drives.

  • Not really. I know that my sleep is worth more than the difference between HDD and SSD prices, and I know the difference between the failure rates and the headache caused by the RMA process, so I buy SSDs.

    In essence, what we're both saying is that people with super-sensitive sleep who are also easily upset, and who don't have ultra-high salaries, can't really afford 18 TB of data (even though they can afford an HDD), and that's true.

    • Well, again, well done on being able to afford it. I have a 24 TB array of cheap second-hand drives from CEX at about £100 each, using DrivePool - and guess what, if one of them dies I'll just buy another £100 second-hand drive. But also guess what - in the 6 years I've had this setup, all of them are still in good condition. Paying for SSDs upfront would have been a gigantic financial mistake (imho).

Might be a bit adventurous for primary storage (though with enough backups and redundancy, why not). But it seems perfect to me for backup / cold storage.

Every drive is "used" the moment you turn it on.

  • There's a big difference between "used" as in "I just bought this hard drive and have used it for a week in my home server," and "used" as in "a refurbished drive after years of hard labor in someone else's server farm."

    • Enterprise drives are way different from anything consumer-based. I wouldn't trust a consumer drive used for 2 years, but a true enterprise drive has something like millions of hours left of its life.

      Quote from Toshiba's paper on this [1]:

      Hard disk drives for enterprise server and storage usage (Enterprise Performance and Enterprise Capacity Drives) have MTTF of up to 2 million hours, at 5 years warranty, 24/7 operation. Operational temperature range is limited, as the temperature in datacenters is carefully controlled. These drives are rated for a workload of 550TB/year, which translates into a continuous data transfer rate of 17.5 Mbyte/s[3]. In contrast, desktop HDDs are designed for lower workloads and are not rated or qualified for 24/7 continuous operation.

      From Synology [2]:

      With support for 550 TB/year workloads and rated for a 2.5 million hours mean time to failure (MTTF), HAS5300 SAS drives are built to deliver consistent and class-leading performance in the most intense environments. Persistent write cache technology further helps ensure data integrity for your mission-critical applications.

      [1] https://toshiba.semicon-storage.com/content/dam/toshiba-ss-v...

      [2] https://www.synology.com/en-us/company/news/article/HAS5300/...
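
      (For context, MTTF is a population statistic rather than a per-drive lifetime; a rough way to read those figures as an annualized failure rate, assuming a constant-failure-rate model:)

          import math

          HOURS_PER_YEAR = 24 * 365.25

          # Approximate annualized failure rate implied by a rated MTTF,
          # assuming an exponential (constant failure rate) model. Rough only.
          def afr_from_mttf(mttf_hours: float) -> float:
              return 1 - math.exp(-HOURS_PER_YEAR / mttf_hours)

          print(f"{afr_from_mttf(2_000_000):.2%}")  # ~0.44%/year (the 2M-hour Toshiba figure)
          print(f"{afr_from_mttf(2_500_000):.2%}")  # ~0.35%/year (the 2.5M-hour Synology figure)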

    • Drive failure rate versus age is a U-shaped curve. I wouldn't distrust a used drive with healthy performance and SMART parameters.

      And you should use some form of redundancy/backups anyway. It's also a good idea to not use all disks from the same batch to avoid correlated failures.
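
      (A quick way to sanity-check a used ATA drive is to read a few SMART attributes. Minimal sketch using smartmontools' JSON output; it assumes smartctl 7+ and root, the device path is a placeholder, attribute names vary by vendor, and NVMe/SAS drives report differently.)

          import json
          import subprocess

          WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
                   "Offline_Uncorrectable", "Power_On_Hours"}

          def smart_report(device: str = "/dev/sda") -> None:
              # check=False: smartctl uses nonzero exit codes to flag SMART conditions.
              out = subprocess.run(["smartctl", "-j", "-H", "-A", device],
                                   capture_output=True, text=True, check=False)
              data = json.loads(out.stdout)
              print("overall health passed:", data.get("smart_status", {}).get("passed"))
              for attr in data.get("ata_smart_attributes", {}).get("table", []):
                  if attr["name"] in WATCH:
                      print(attr["name"], attr["raw"]["string"])

          smart_report()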