Comment by bugfix
2 years ago
I had this exact experience with my workstation SSD (NTFS) after a short power loss while NPM was running. After I turned the computer back on, several files (package.json, package-lock.json and many others inside node_modules) had the correct size on disk but were filled with zeros.
I think the last time I had corrupted files after a power loss was in a FAT32 disk on Win98, but you'd usually get garbage data, not all zeros.
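For anyone who hits the same thing: a quick sketch of the kind of check that flags these files (the function and paths here are just illustrative, not part of any npm tooling) is to walk the tree and look for non-empty files that read back as all zero bytes:

```python
import os

def find_zero_filled(root):
    """Walk a directory tree and return paths of non-empty files
    whose contents are entirely zero bytes -- the telltale sign of
    size-preserved-but-data-lost corruption described above."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if size == 0:
                continue  # genuinely empty files are fine
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large files don't blow up memory.
                if all(chunk == b"\x00" * len(chunk)
                       for chunk in iter(lambda: f.read(1 << 20), b"")):
                    hits.append(path)
    return hits
```

Running something like `find_zero_filled("node_modules")` after the crash would have listed exactly the files that needed to be restored or reinstalled.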
> but you'd usually get garbage data, not all zeros.
You are less likely to get garbage with an SSD in combination with a modern filesystem because of TRIM. Even if the SSD has not (yet) wiped the data, it knows that a block marked as unused can be returned as a block of 0s without needing to check what is currently stored in that block.
Traditional drives had no such facility for marking blocks as unused from their point of view, so they always performed the read and returned whatever they found. That was most likely junk (old data from deleted files that would make sense in another context), though it could also be a block of zeros (because that block hadn't been written since the drive had a full format or someone zeroed the free space).
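To illustrate the difference with a toy model (this is a sketch of the semantics, not real firmware behaviour): an SSD's mapping layer can answer a read of a trimmed block with zeros without looking at the flash at all, while a spinning disk has no "unused" concept and just returns whatever bytes are still sitting in the sector:

```python
class ToySSD:
    """Toy model of an SSD with TRIM: reads of trimmed (unmapped)
    logical blocks return zeros without consulting stored data."""
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}  # logical block number -> bytes

    def write(self, lba, data):
        self.blocks[lba] = data

    def trim(self, lba):
        # The filesystem tells the drive this block is unused; the
        # mapping is dropped (the actual flash erase can happen later).
        self.blocks.pop(lba, None)

    def read(self, lba):
        # Unmapped blocks come back as zeros, no media access needed.
        return self.blocks.get(lba, b"\x00" * self.block_size)


class ToyHDD:
    """Toy model of a traditional drive: no notion of 'unused', so a
    read always returns whatever bytes were last written there."""
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        # Stale data from deleted files ("junk") survives until
        # something overwrites the sector.
        return self.blocks.get(lba, b"\x00" * self.block_size)
```

So after `write(7, b"ABCD")` followed by `trim(7)`, the ToySSD reads back zeros, while the ToyHDD keeps returning `b"ABCD"` even after the file referencing that block is long gone.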
They may be pointing to unallocated space, which on an SSD running TRIM would return all zeros. NTFS is an extremely resilient, if boring, filesystem; I cannot remember the last time I had to run chkdsk, even after an improper shutdown.
As somebody who worked as a PC technician for a while until very recently, I've run chkdsk and had to repair errors on NTFS filesystems very, very, very often. It's almost an everyday thing. Anecdotal evidence is less than useful here.
So anecdotal evidence is not useful, as proven by your anecdotal evidence? :)
FWIW I've found NTFS and ext3/4 to be of similar reliability over the years, in general use and in the face of improper shutdown. Metadata journaling does a lot to preserve the filesystem in such circumstances. Most of the few significant problems I've had have been due to hardware issues, which few filesystems on their own will help you with.
It is worth noting that when you run tools like chkdsk or fsck, some of the issues reported and fixed are not data-damaging or structurally dangerous, or at least not immediately so. For instance, free areas marked in such a way that they look used to the allocation algorithms.