Comment by readyplayernull
3 years ago
A few months ago I was looking for an external backup drive and thought an SSD would be great because it's fast and shock resistant. Years ago I killed a MacBook Pro HD by dropping it on my bed from a few inches up. Then I read a comment on Amazon about SSDs losing information when left unpowered for a long time. I couldn't find any quick confirmation on the product page; it took me a few hours of research to find a paper about this phenomenon. If I remember correctly, it takes a few weeks for an unpowered SSD to start losing its data. So I bought a mechanical HD.
Another tech tip: don't buy two backup devices from the same batch, or even the same model. Chances are they will fail in the same way.
To the last bit: I've seen this firsthand. I had a whole RAID array of the infamous IBM "Deathstar" drives fail one after the other while we frantically copied data off.
That was the last time I ever put same-model drives in an array.
Heh, I remember in the early 1990s having a RAID array with a bunch of 4GB IBM drives come up dead after a weekend powerdown for a physical move, due to "stiction". I was on the phone with IBM, and they were telling me to physically bang the drives on the edge of a desk to loosen them up. That didn't seem to be working, so their advice was "hit it harder!" When I protested, they said, "hey, it already doesn't work, what have you got to lose?" So I hit it harder. Eventually I got enough drives to start up to bring the array back online, and you'd better believe the first thing I did after that was create a fresh backup (not that we didn't have a recent one anyway), and the second thing I did was replace those drives, IIRC with Seagate Barracudas.
Ouch. I knew someone who claimed to have dealt with that, or a similar effect, after their cleaning person pulled the plug on their servers: they put the drive in an oven while it was connected and heated it slowly.
Personally, my most nail-biting period was when I got my first (20MB!) drive as a kid. It was too big an investment to replace, even when it refused to spin up without me opening the drive(!) and nudging the platter with my finger to help the motor get it going... I backed everything up (on floppies) and saved everything important straight to floppies, but it was still more convenient to hold on to the HD for the next 6 months or so, until I'd saved up enough to replace it...
It's remarkable what drives can survive if you're lucky... Also remarkable how quickly that luck can run out, though.
> "hey, it already doesn't work, what have you got to lose?"
This attitude has saved me more than once. Recognising when you can afford to do things that seem ridiculous helps surprisingly often.
When I was still relatively familiar with flash memory technologies (in particular NAND flash, the type used in SSDs and USB drives), the retention specs were something like 10 years at 20C after 100K cycles for SLC, and 5 years at 20C after 5-10K cycles for MLC. The more flash is worn, the leakier it becomes. I believe the "few weeks" number for modern TLC/QLC flash, but I suspect that is still after the specified endurance has been reached. In theory, if you only write to the flash once, then the retention should still be many decades.
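To put the wear/retention relationship in code, here's a toy model interpolating between "fresh" and the end-of-life spec points quoted above. The 30-year fresh figure and the log-linear decay shape are my own illustrative assumptions, not datasheet formulas:

    # Toy model: NAND retention shrinking as P/E cycles accumulate.
    import math

    # (rated endurance in P/E cycles, rated retention in years at 20C)
    SPEC_POINTS = {"SLC": (100_000, 10.0), "MLC": (5_000, 5.0)}

    FRESH_RETENTION_YEARS = 30.0  # assumed: a once-written cell holds for decades

    def estimated_retention_years(cell_type, pe_cycles):
        """Full retention when fresh, decaying log-linearly toward the
        rated end-of-life retention as wear approaches rated endurance."""
        endurance, eol_retention = SPEC_POINTS[cell_type]
        frac = min(math.log1p(pe_cycles) / math.log1p(endurance), 1.0)
        return FRESH_RETENTION_YEARS - (FRESH_RETENTION_YEARS - eol_retention) * frac

    for cycles in (1, 1_000, 100_000):
        print(f"SLC after {cycles:>7,} P/E cycles: "
              f"~{estimated_retention_years('SLC', cycles):.1f} years retention")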
Someone is trying to find out with an experiment, however: https://news.ycombinator.com/item?id=35382252
Indeed. The paper everyone gets the "flash loses its data in a few years" claim from wasn't dealing with consumer flash or consumer use patterns. Remember that having the drive powered up wouldn't stop that kind of degradation without explicitly reading and re-writing the data. Surely you have a file on an SSD somewhere that hasn't been re-written in several years; go check for yourself whether it's still good.
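If you actually want to try that, here's a rough sketch (the directory path and the three-year cutoff are placeholders I picked, not anything from the thread). It walks a tree, skips anything modified recently, and reads each old file end to end so the drive has to return every block; an ECC-uncorrectable sector surfaces as an I/O error, though silent corruption that still passes the drive's ECC would need stored checksums to catch:

    # Read-back check for long-untouched files on an SSD.
    import os
    import sys
    import time

    AGE_YEARS = 3  # cutoff for "hasn't been re-written in years"
    ROOT = sys.argv[1] if len(sys.argv) > 1 else "."  # directory to scan

    cutoff = time.time() - AGE_YEARS * 365 * 24 * 3600

    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > cutoff:
                    continue  # touched too recently to be interesting
                with open(path, "rb") as f:
                    while f.read(1 << 20):  # read in 1 MiB chunks
                        pass
            except OSError as e:
                print(f"READ FAILED: {path}: {e}")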
Even the utter trash that is thumb drives and SD cards seems to hold data just fine for many years in actual use.
IIRC, the paper was explicitly about heavily used and abused storage.