Comment by JoshTriplett
16 hours ago
> Arguably, principle of least surprise is very Apple.
Principle of least surprise is good engineering practice. The question is always whose surprise. Someone who expects tar to behave like other UNIX systems is going to be surprised by this. Someone who expects tar on Apple to have perfect fidelity would be surprised by not-this.
I increasingly feel like build systems should never be relying on any "native" utilities from the host system, and should instead be bringing them in via dependencies. You can't have this problem if your packaging system pulls in a specific portable `tar` library.
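A minimal sketch of the "pin the format, don't trust the host tar" idea (file names here are made up for illustration):

```shell
#!/bin/sh
# Sketch: pin the archive format explicitly instead of relying on the
# host tar's defaults. --format=pax (POSIX.1-2001) is understood by
# both GNU tar and libarchive's bsdtar, so the output is predictable.
set -eu
mkdir -p demo
echo hello > demo/file.txt
tar --format=pax -cf demo.tar demo
tar -tf demo.tar
```

A vendored tar library goes further than this, but even pinning the format removes one class of cross-host surprises.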
What should really be surprising to users of UNIX-like operating systems is losing data because traditional UNIX utilities like cp, tar, or cpio do not make complete copies of files, as one would expect from their descriptions.
What is worse is that these utilities do not give any warnings when they do not make complete copies. For cp, the root cause is that it has bad default options, while for tar and cpio the standard file formats cannot store the metadata of modern file systems.
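One concrete example of cp's unhelpful defaults (GNU coreutils; file names are made up): a bare `cp` follows symlinks, while archive mode preserves them.

```shell
#!/bin/sh
# Plain `cp` dereferences symlinks (and drops ownership and xattrs) by
# default; `cp -a` (archive mode) preserves the link itself.
set -eu
echo x > target
ln -sf target link
cp link copy1     # copy1 becomes a regular file: the symlink was followed
cp -a link copy2  # copy2 stays a symlink
```

Neither invocation prints a warning about what it did or did not preserve.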
The various tar programs have their own different file format extensions to deal with modern file systems, which are guaranteed to work only when using the same tar program for both creation and extraction. The better tar programs implement both their own file format extensions and the file format extensions used by other popular tar programs.
The author of TFA used some obsolete tar program, which is the cause of the surprising behavior that was seen.
To avoid loss of data on Linux, I always use the PAX file format instead of the old tar or cpio formats, with the extensions implemented by `bsdtar --create --format=pax` from libarchive. I also always alias cp to `/bin/cp --no-dereference --recursive --one-file-system --preserve=all --strip-trailing-slashes --verbose --interactive`, where cp has been built with extended-attributes support.
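A sketch of the pax/xattr point (assumes GNU tar built with xattr support, the `attr` package for setfattr, and a filesystem allowing user xattrs; paths are illustrative):

```shell
#!/bin/sh
# The pax format can carry extended attributes that the old v7/ustar
# formats silently drop; --xattrs asks tar to record them in pax headers.
set -eu
mkdir -p src
echo data > src/f
setfattr -n user.origin -v example src/f || true  # skipped if xattrs unavailable
tar --format=pax --xattrs -cf pax.tar src
tar -tf pax.tar
```

Extracting with `tar --xattrs -xf pax.tar` on the other end is what actually restores the attributes; a plain extract may still ignore them.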
> The question is always whose surprise.
I think that the surprise of more data than expected is more desirable than the surprise of data loss. So in this case, it seems like the safe choice.
Agreed. I usually hate on Apple, with its terribly ancient utilities and gratuitous incompatibility with modern Linux utilities, motivated by its hatred of the GPL license.
But in this case, I think what it's doing is… basically fine? "Tar should faithfully reproduce the semantics of the source filesystem" is a perfectly reasonable starting point.
Ideally there would be a documented way to turn off the Apple-specific metadata with Apple's own tar, though.
From tar(1):
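Presumably the relevant bits are `--no-mac-metadata` and `COPYFILE_DISABLE`; the snippet below is a hedged sketch from memory of that man page (verify locally; `somedir` is a placeholder), guarded so it also runs off macOS:

```shell
#!/bin/sh
# On macOS, bsdtar documents --no-mac-metadata, and the copyfile(3)
# machinery honors the COPYFILE_DISABLE environment variable; both
# suppress the AppleDouble ("._*") entries in the archive.
set -eu
mkdir -p somedir && touch somedir/f
if [ "$(uname)" = Darwin ]; then
    COPYFILE_DISABLE=1 tar -cf clean.tar somedir
    # equivalently:
    tar --no-mac-metadata -cf clean.tar somedir
else
    tar -cf clean.tar somedir   # flags not applicable off macOS
fi
tar -tf clean.tar
```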
> Someone who expects tar to behave like other UNIX systems is going to be surprised by this
They shouldn’t. The GNU tar manual already shows this behavior. https://www.gnu.org/software/tar/manual/html_node/What-tar-D...:
“Because the archive created by tar is capable of preserving file information and directory structure, tar is commonly used for performing full and incremental backups of disks”
And yes, that same page also says:
“You can create an archive on one system, transfer it to another system, and extract the contents there. This allows you to transport a group of files from one system to another.”
> You can't have this problem if your packaging system pulls in a specific portable `tar` library.
You can’t pull in specific portable stuff all the way down (not even when running in Docker or a VM), so that will decrease the risk, but it cannot completely remove it. As an example, I think GNU tar will happily include .DS_Store files in archives.
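Right: GNU tar archives whatever is on disk unless told otherwise, so an explicit exclude is still needed (illustrative names):

```shell
#!/bin/sh
# GNU tar includes .DS_Store like any other file; --exclude filters it.
set -eu
mkdir -p proj
touch proj/main.c proj/.DS_Store
tar --exclude='.DS_Store' -cf proj.tar proj
tar -tf proj.tar   # lists proj/ and proj/main.c only
```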
Apple is always surprised that non-Apple devices exist.
See: the permanent undismissable red icon to "finish setting up your Apple TV with your iPhone"
Apple can't control non-Apple devices. They can only control their own. So this makes perfect sense.
They could control their own Apple TVs to allow that dialogue to be dismissed via the TV controls.
4 replies →
> I increasingly feel like build systems should never be relying on any "native" utilities from the host system, and should instead be bringing them in via dependencies.
Well, you see, while this frankly applies not just to build systems but to most software, the consensus among distro maintainers is that it's actually wrong: you should use your system's package manager and the tools it can install, let it fiddle with the ambient environment, and enjoy that delicious "path dependency". And if your distro's packaging environment doesn't let you do the things you need (e.g. installing both mongodb 3.8 and mongodb 5.0, ideally at the same time; okay, I can keep running apt remove/install over and over, but I do need to check whether my app correctly handles the wire protocol changes), well, that's your problem for desiring strange things.
NixOS has a pretty solid solution to this issue: key your dependencies with checksums of the content. That way you get the best of both worlds: you always get the exact version you want, and you can share a copy of that exact version with other software that wants to use that exact version too!
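A toy sketch of the content-addressed idea (not real Nix; all names are made up):

```shell
#!/bin/sh
# Store entries are keyed by a hash of their content, so two packages
# depending on byte-identical content share a single copy, while two
# different versions get distinct keys and coexist.
set -eu
mkdir -p store
printf 'libfoo-1.2 contents' > input
key=$(sha256sum input | cut -c1-16)
cp input "store/${key}-libfoo-1.2"
ls store
```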
Yeah, Nix-like distributions (e.g. Guix, Lix) do for Linux systems what some language package managers (e.g. cargo) do for individual projects.
So it sounds like you don’t get the exact version you want because metadata is thrown away.
It's a checksum, not the content itself.
Are the xattr / chattr / umask checksums rolled into the main data fork content or are they hashed separately (or not at all)?
IIRC, Nix's hash is computed from the source of the content (the build inputs), not from the build results.
1 reply →