Comment by TacticalCoder
4 days ago
So basically, if you like to put SSDs on shelves (for offline backups), you should read them in full once in a while?
I religiously rotate my offline SSDs and HDDs (I store backups on both): something like four disks at home (offline onsite) and two (one SSD, one HDD) in a safe at the bank (offline offsite).
Every week or so I rsync to the offline disks at home (a bit more than plain rsync: I wrap rsync in a script that detects potential bitrot using a combination of an rsync dry run and known-good cryptographic checksums before doing the actual rsync [1]), and then every month or so I rotate by swapping the SSD and HDD at the bank with those at home.
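A minimal sketch of that verify-before-sync idea, on a throwaway directory (sha256sum and the paths here are illustrative stand-ins, not my actual script):

```shell
set -eu
tmp=$(mktemp -d)
mkdir "$tmp/data"
printf 'important bits\n' > "$tmp/data/file.txt"

# Record known-good checksums while the data is still trusted.
( cd "$tmp/data" && sha256sum file.txt > ../manifest.sha256 )

# Later, BEFORE rsyncing to the offline disk, re-verify the live copy:
if ( cd "$tmp/data" && sha256sum -c --quiet ../manifest.sha256 ); then
    result=ok
    echo "verified: safe to rsync"
    # rsync -a "$tmp/data/" /mnt/offline/data/   # real sync goes here
else
    result=bitrot
    echo "MISMATCH: possible bitrot, do NOT sync" >&2
fi
```

The point of the ordering is that rsync only ever runs against a copy that just passed the checksum manifest, so rot can't silently propagate to the offline disks.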
Maybe I should add to the process, for SSDs, once every six months:
    $ dd if=/dev/sda bs=1M | xxhsum
I could easily automate that in my backup script by adding a file lastknownddtoxxhash.txt containing the date of the last full dd-to-xxhsum pass, verifying that, and then asking, if an SSD is detected (I take it that on an HDD it doesn't matter), whether a full read-to-hash should be done.
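A sketch of that automation, with a scratch image standing in for /dev/sda so it can run anywhere, sha256sum standing in for xxhsum, and a placeholder stamp filename:

```shell
set -eu
tmp=$(mktemp -d)
disk="$tmp/fake-disk.img"
stamp="$tmp/last-full-read.txt"   # placeholder for the date file
dd if=/dev/zero of="$disk" bs=1M count=4 status=none

# Re-read only if the last full read is missing or older than ~180 days.
if [ ! -f "$stamp" ] || [ -n "$(find "$stamp" -mtime +180)" ]; then
    # One full sequential read of the "disk"; keep the hash around to
    # compare against the next run.
    hash=$(dd if="$disk" bs=1M status=none | sha256sum | cut -d' ' -f1)
    date -u +%Y-%m-%d > "$stamp"
    echo "full read done, hash=$hash"
fi
```

Besides refreshing the cells by reading them, the hash gives you a free consistency check: if the disk hasn't been written since the last pass, the two hashes should match.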
Note that I'm already using random sampling on files containing checksums in their name, so I'm already verifying x% of the files anyway. So I'd probably detect a fading SSD quite easily.
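For the curious, the name-embedded-checksum check looks roughly like this (the `<name>.<hash>.dat` convention and sha256 are illustrative; my real naming scheme differs):

```shell
set -eu
tmp=$(mktemp -d)

# Create a demo file whose filename carries its own checksum.
printf 'payload' > "$tmp/t"
sum=$(sha256sum "$tmp/t" | cut -d' ' -f1)
mv "$tmp/t" "$tmp/report.$sum.dat"

# Verify each sampled file: recompute and compare to the name.
# (A real run would sample a subset, e.g. via `shuf -n "$N"`.)
bad=0
for f in "$tmp"/*.dat; do
    stored=$(basename "$f" .dat)
    stored=${stored##*.}            # hash component of the filename
    actual=$(sha256sum "$f" | cut -d' ' -f1)
    [ "$stored" = "$actual" ] || { echo "bitrot: $f" >&2; bad=1; }
done
echo "files with mismatches: $bad"
```

Since the expected hash travels inside the filename, the check needs no external manifest, which makes random spot-checks on an offline disk cheap.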
Additionally I've also got a server with ZFS in mirroring so this, too, helps keep a good copy of the data.
FWIW I still have most of the personal files from my MS-DOS days so I must be doing something correctly when it comes to backing up data.
But yeah: adding a "dd to xxhsum" of the entire disks once every six months to my backup script seems like a nice little addition. Heck, I may go hack that feature in now.
[1] otherwise rsync will happily overwrite good files with bitrotten ones