
Comment by ckastner

2 days ago

There is some nuance to this. Adding comments to the stated goal "Everyone who interacts with Debian source code (1) should be able to do so (2) entirely in git":

(1) "should be able" does not imply "must"; people are free to continue to use whatever tools they see fit

(2) Most of Debian work is of course already git-based, via Salsa [1], Debian's self-hosted GitLab instance. This is more about what is stored in git, how it relates to a source package (= what .debs are built from). For example, currently most Debian git repositories base their work in "pristine-tar" branches built from upstream tarball releases, rather than using upstream branches directly.

[1]: https://salsa.debian.org

> For example, currently most Debian git repositories base their work in "pristine-tar" branches built from upstream tarball releases

I really wish all the various open source packaging systems would get rid of the concept of source tarballs to the extent possible, especially when those tarballs are not sourced directly from upstream. For example:

- Fedora has a “lookaside cache”, and packagers upload tarballs to it. In theory they come from git as indicated by the source rpm, but I don’t think anything verifies this.

- Python packages build a source tarball. In theory, the new best practice is for a GitHub action to build the package and for a complex mess to attest that it really came from GitHub Actions.

- I’ve never made a Debian package, but AFAICT the maintainer kind of does whatever they want.

IMO this is all absurd. If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that. If it stores a copy of the source, that copy should be cryptographically traceable to the commit in question, which is straightforward: the commit hash is a hash over a bunch of data including the full source!
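That traceability claim can be checked mechanically. As a minimal sketch (the helper name is mine, not a real tool's), this reproduces how Git derives a blob's object ID from nothing but the bytes themselves, which is the property that lets a stored copy of the source be verified against a commit:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Reproduce `git hash-object`: SHA-1 over a typed,
    length-prefixed header followed by the raw bytes."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The well-known ID Git assigns the empty blob:
print(git_blob_id(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Tree and commit objects are hashed the same way over payloads that embed these IDs, so a commit hash transitively covers every byte of the source.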

  • For lots of software projects, a release tarball is not just a gzipped repo checked out at a specific commit. So this would only work for some packages.

    • A simple version of this might be a repo with a single file of code in a language that needs compilation, versus a tarball containing one compiled binary.

      Just having a deterministic binary can be non-trivial, let alone a way to confirm "this output came from that source" without recompiling everything again from scratch.

    • For most well designed projects, a source tarball can be generated cleanly from the source tree. Sure, the canonical build process goes (source tarball) -> artifact, but there’s an alternative build process (source tree) -> artifact that uses the source tarball as an intermediate.

      In Python, there is a somewhat clearly defined source tarball. uv build will happily build the source tarball and the wheel from the source tree, and uv build --from <appropriate parameter here> will build the wheel from the source tarball.

      And I think it’s disappointing that one uploads source tarballs and wheels to PyPI instead of uploading an attested source tree and having PyPI do the build, at least in simple cases.

      In traditional C projects, there’s often some script in the source tree that turns it into the source tarball tree (autogen.sh is pretty common). There is no fundamental reason that a package repository like Debian’s or Fedora’s couldn’t build from the source tree and even use properly pinned versions of autotools, etc. And it’s really disappointing that the closest widely used thing to a proper C/C++ hermetic build system is Dockerfile, and Dockerfile gets approximately none of the details right. Maybe Nix could do better? C and C++ really need something like Cargo.

      4 replies →

    • If it isn't at least a gzip of a subset of the files of a specific commit of a specific repo, someone's definition of "source" would appear to need work.

      6 replies →

  • > If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that.

    For Debian, that's what tag2upload is doing.

  • Shoutout to the AUR: I’m trying Arch for the first time (Omarchy) and wasn’t planning on using the AUR, but realized how useful it is when 3 of the tools I wanted to try were each distributed differently. The AUR made it insanely easy… (namely, I had issues with Obsidian and Google Antigravity)

If "whatever tools they see fit" means "patch quilting" then please no. Leave the stone age and enter the age of modern DVCS.

  • git can be seen as porcelain on top of patch quilting, so it's not as much stone age as one might think

    • This is a misunderstanding of what Git does. Git is a content-addressed, immutable/append-only filesystem organized as a Merkle hash tree, with commits as objects that bind a filesystem root by its hash. The diffs that make up a commit are not really its contents; they are computed as needed. Most of the time it's best to think of Git as a patch-quilting porcelain, and you can get very far with that model, but at some point you need to understand that it goes deeper.

      2 replies →
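The Merkle structure described in that comment can be sketched in a few lines of Python. This is a simplified model, not Git's real on-disk format (real tree payloads also encode file modes and names), but the hashing scheme is the same for every object type:

```python
import hashlib

def git_object_id(obj_type: str, payload: bytes) -> str:
    # Every Git object is hashed identically: "<type> <size>\0" + payload.
    header = f"{obj_type} {len(payload)}".encode() + b"\x00"
    return hashlib.sha1(header + payload).hexdigest()

# A blob holds content; a tree's payload embeds blob IDs; a commit's
# payload embeds the tree ID (and parent commit IDs). Changing any file
# changes the blob ID, hence the tree ID, hence every descendant commit.
blob_id = git_object_id("blob", b"hello\n")
tree_id = git_object_id("tree", bytes.fromhex(blob_id))  # simplified tree
commit_id = git_object_id("commit", f"tree {tree_id}\n...metadata...\n".encode())

# Tampering with the blob ripples up: the commit ID no longer matches.
evil_tree = git_object_id("tree", bytes.fromhex(git_object_id("blob", b"evil\n")))
evil_commit = git_object_id("commit", f"tree {evil_tree}\n...metadata...\n".encode())
assert evil_commit != commit_id
```

This chaining is why a commit hash can serve as a verifiable claim about the full source, while the patch-quilting view (diffs as the primary objects) cannot.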