Comment by hansvm
5 years ago
Not really. If you have a file format that requires changes to be made in two places, it's entirely possible to write to the first place, have the system shut down before the second write ever happens, and end up with a corrupt file.
The journal ensures (helps ensure?) that individual file operations either happen or don't, and it can improve write performance, but it can't possibly know that you need to write, e.g., two separate 20TB streams for the file to be non-corrupt.
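Roughly this shape, as a sketch (the file layout, offsets, and names here are made up purely for illustration):

    import os

    def apply_change(path):
        # The format (hypothetically) stores related data at two offsets;
        # both must be updated for the file to be consistent.
        fd = os.open(path, os.O_WRONLY)
        try:
            os.pwrite(fd, b"part-1", 0)       # journaled atomically on its own
            # <- a crash here leaves the file half-updated: each pwrite()
            #    is fine individually, but the *pair* was never atomic
            os.pwrite(fd, b"part-2", 4096)
            os.fsync(fd)
        finally:
            os.close(fd)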
For a single file, I thought write operations were committed when, e.g., closing the file or calling fsync, but now I'm not sure. I wonder whether the system is free to commit immediately after a write() returns.
Based on your scenario, if an application-level "change" involves updating 2 files, and updating only one of them counts as corruption, then you're right that filesystem journaling wouldn't suffice. In that case, though, it wouldn't be a single file that was corrupted.
Still, I wonder about the other question: when does the filesystem actually decide to commit?
The system is free to commit at any point after a write() returns -- an fsync is what guarantees the commit has happened. (Note that close() by itself doesn't guarantee it; POSIX close() doesn't imply fsync.)
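In POSIX terms, something like this (a sketch; error handling omitted):

    import os

    fd = os.open("data.bin", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"hello")  # lands in the page cache; the kernel may
                            # flush it to disk at any moment after this
    os.fsync(fd)            # blocks until the data is durable -- the
                            # only portable commit guarantee
    os.close(fd)            # close() alone does not imply fsync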
>Based on your scenario, if an application-level "change" involves updating 2 files
It could be two parts of the same file too, e.g. if you're using a single file with something like recutils to implement double-entry accounting and only one entry gets committed. You'll at least be able to detect the corruption in that case (not that you can in general), but you won't be able to fix it using only the contents of the file.
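A toy version of that detect-but-can't-repair situation (a made-up ledger representation, not actual recutils):

    def is_balanced(entries):
        # entries: (account, signed amount) pairs; double-entry
        # bookkeeping requires the amounts to sum to zero
        return sum(amount for _, amount in entries) == 0

    ledger = [("cash", -100), ("expenses", +100),
              ("cash", -50)]        # crash dropped the matching +50 entry
    print(is_balanced(ledger))      # False: corruption detected, but the
                                    # file alone can't say what's missing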