There is some nuance to this. Adding comments to the stated goal "Everyone who interacts with Debian source code (1) should be able to do so (2) entirely in git":

(1) "Should be able" does not imply "must"; people are free to continue to use whatever tools they see fit.

(2) Most Debian work is of course already git-based, via Salsa [1], Debian's self-hosted GitLab instance. This is more about what is stored in git and how it relates to a source package (= what .debs are built from). For example, currently most Debian git repositories base their work on "pristine-tar" branches built from upstream tarball releases, rather than using upstream branches directly.

[1]: https://salsa.debian.org
> For example, currently most Debian git repositories base their work in "pristine-tar" branches built from upstream tarball releases
I really wish all the various open source packaging systems would get rid of the concept of source tarballs to the extent possible, especially when those tarballs are not sourced directly from upstream. For example:
- Fedora has a “lookaside cache”, and packagers upload tarballs to it. In theory they come from git as indicated by the source rpm, but I don’t think anything verifies this.
- Python packages build a source tarball. In theory, the new best practice is for a GitHub Action to build the package and for a complex mess of attestations to certify that it really came from GitHub Actions.
- I’ve never made a Debian package, but AFAICT the maintainer kind of does whatever they want.
IMO this is all absurd. If a package hosted by Fedora, Debian, PyPI, crates.io, etc. claims to correspond to an upstream git commit or release, then the hosting system should build the package from the commit or release in question, plus whatever package-specific config and patches are needed, and publish that. If it stores a copy of the source, that copy should be cryptographically traceable to the commit in question, which is straightforward: the commit hash is a hash over a bunch of data including the full source!
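That traceability check is mechanical. A sketch in shell, where the `upstream` checkout, the `v1.2.3` tag, and the tarball name are all illustrative:

```shell
# Compare a release tarball's file list against the tagged upstream commit.
git -C upstream archive v1.2.3 | tar -tf - | sort > from-git.txt
# Strip the tarball's top-level directory so the listings are comparable.
tar -tzf upstream-1.2.3.tar.gz | sed 's|^[^/]*/||' | grep -v '^$' | sort > from-tarball.txt
diff from-git.txt from-tarball.txt && echo "tarball matches the tagged commit"
```

This only compares file lists, not contents, and it also illustrates the sibling comment's caveat: for projects whose release tarballs include generated files, the lists won't match the commit.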
For lots of software projects, a release tarball is not just a gzipped repo checked out at a specific commit. So this would only work for some packages.
> If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that.
shoutout AUR, I’m trying arch for the first time (Omarchy) and wasn’t planning on using the AUR, but realized how useful it is when 3 of the tools I wanted to try were distributed differently. AUR made it insanely easy… (namely had issues with Obsidian and Google Antigravity)
The whole patch quilting thing is awful. Just keep the patches as commits. It won't "trick" me or anyone else, especially if you keep them in branches that denote "debian".
Please, please, stop the nonsense with the patch quilting -- it's really cumbersome, it adds unnecessary cognitive load, it raises the bar to contributions, it makes maintenance harder, and it adds _zero value_. Patch quilting is a lose-lose proposition.
Maintaining separate upstream sources and downstream patches does provide value. Maybe not to you, but it does.
For example, it's trivial from a web browser with a couple of clicks to find all the downstream changes to a package. For instance, to see how glibc is currently customized in debian testing/unstable you can just navigate to this page:

https://sources.debian.org/src/glibc/2.42-6/debian/patches
If everything gets merged in the same git tree it's way harder. Harder but doable with a rebase+force push workflow, which makes collaboration way harder. Just impossible with a merge workflow.
As an upstream maintainer of several projects, being able to tell at a glance and with a few clicks how one of my projects is patched in a distribution is immensely useful when bug reports are opened.
In a past job it also literally saved a ton of money because we could show legal how various upstreams were customized by providing the content of a few .debian.tar.gz tarballs with a few small, detached patches that could be analyzed, instead of massive upstream trees that would take orders of magnitude more time to go through.
> For example, it's trivial from a web browser with a couple of clicks to go and find out all the downstream changes to a package.
How is this not also true for Git? Just put all the Debian commits "on top" and use an appropriate naming convention for your branches and tags.
> If everything gets merged in the same git tree it's way harder.
Yes, so don't merge, just rebase.
> Harder but doable with a rebase+force push workflow, which makes collaboration way harder.
No force pushes, just use new branch/tag names for new releases.
> Just impossible with a merge workflow.
Not impossible but dumb. Don't use merge workflows!
> As an upstream maintainer of several project, being able to tell at a glance and with a few clicks how one of my projects is patched in a distribution is immensely useful when bug reports are opened.
Git with a suitable web front-end gives you exactly that.
> In a past job it also literally saved a ton of money because we could show legal how various upstreams were customized by providing the content of a few .debian.tar.gz tarballs with a few small, detached patches that could be analyzed, instead of massive upstream trees that would take orders of magnitude more time to go through.
`git format-patch` and related can do the moral equivalent.
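A minimal sketch of that workflow, with illustrative tag and branch names: the Debian commits ride on top of each upstream tag, and the detached-patch view for review is generated on demand.

```shell
# New upstream release v1.3: replay the Debian-only commits from the
# previous packaging branch onto it, on a new branch (no force push needed).
git checkout -b debian/1.3 debian/1.2
git rebase --onto v1.3 v1.2 debian/1.3

# Audit view on demand: one small, numbered patch file per downstream change.
git format-patch -o debian-patches/ v1.3..debian/1.3
```

The range `v1.3..debian/1.3` selects exactly the downstream commits, so the exported patches are the moral equivalent of a `.debian.tar.gz` patch set.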
> The whole patch quilting thing is awful. Just keep the patches as commits.
I'd say that `quilt` the utility is pretty much abandoned at this point. The name `quilt` remains in the format name, but otherwise is not relevant.
Nowadays, people who maintain patches do it via `gbp pq` (the "patch queue" subcommand of the badly named `git-buildpackage` software). `gbp pq switch` reads the patches stored in `debian/patches/`, creates an ephemeral branch on top of HEAD, and replays them there. Any changes made to this branch (new commits, removed commits, amended commits) are transformed by `gbp pq export` into a valid set of patches that replaces `debian/patches/`.
This mechanism introduces two extra commands (one to "enter" and one to "exit" the patch-applied view) but it allows Debian to easily maintain a mergeable Git repo with floating patches on top of the upstream sources. That's impossible to do with plain Git and needs extra tools or special workflows even outside of Debian.
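For the curious, the round trip looks roughly like this (a sketch, assuming you are inside a git-buildpackage packaging checkout with patches in `debian/patches/`):

```shell
gbp pq import    # build the ephemeral patch-queue branch from debian/patches/
# ...add, amend, or drop commits on the patch-queue branch...
gbp pq export    # regenerate debian/patches/ and switch back to the packaging branch
```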
What siblings say. What you want is `git rebase`, especially with the `--onto` and `--interactive` options. You might also want something like bisect-rebase.sh[0], though there are several other things like it now.

[0] https://gist.github.com/nicowilliams/ea2fa2b445c2db50d2ee650...
It's worth mentioning that the quilting approach likely predates the advent of git by at least a decade. I think compatibility with git has been available for a while now, and I assume there was always something more pressing than migrating the base stack to git.
https://wiki.debian.org/UsingQuilt but the short form is that you keep the original sources untouched, then as part of building the package, you apply everything in a `debian/patches` directory, do the build, and then revert them. Sort of an extreme version of "clearly labelled changes" - but tedious to work with since you need to apply, change and test, then stuff the changes back into diff form (the quilt tool uses a push/pop mechanism, so this isn't entirely mad.)
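The push/pop cycle reads roughly like this (a sketch; the patch and file names are illustrative, and `QUILT_PATCHES` points quilt at Debian's patch directory):

```shell
export QUILT_PATCHES=debian/patches
quilt push -a             # apply every patch in the series
quilt new fix-typo.patch  # start a new patch
quilt add src/foo.c       # register a file before editing it
# ...edit src/foo.c...
quilt refresh             # write the diff into debian/patches/fix-typo.patch
quilt pop -a              # put the tree back to pristine upstream
```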
Now if a consequence of that could be that one (as an author of a piece of not-yet-debianized software) can have the possibility to decently build Debian packages out of their own repository and, once the package is qualified to be included in Debian, trivially get the publish process working, that would be a godsend.
At the moment, it is nothing but pain for someone not already accustomed to building Debian packages to even get a local build of a package working.
The problem is that "once the package is qualified to be included in Debian" is _mostly_ about "has the package metadata been filled in correctly" and the fact that all your build dependencies also need to be in Debian already.
If you want a "simple custom repository" you likely want to go in a different direction and explicitly do things that wouldn't be allowed in the official Debian repositories.
For example, dynamic linking is easy when you only support a single Debian release, or when the Debian build/pkg infrastructure handles this for you, but if you run a custom repository you either need a package for each Debian release you care about and have an understanding of things like `~deb13u1` to make sure your upgrade paths work correctly, or use static binaries (which is what I do for my custom repository).
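The `~` in `~deb13u1` is the load-bearing part: in Debian version comparison a tilde sorts before anything, even the end of the string, so a stable update's version stays below the next plain upload and upgrade paths stay monotonic. A quick check with illustrative version strings:

```shell
# "1.0-1~deb13u1" (a trixie stable update) sorts below plain "1.0-1",
# so a user moving to the next release still gets upgraded.
dpkg --compare-versions "1.0-1~deb13u1" lt "1.0-1" && echo "upgrade path OK"
```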
Oh, yes. This seems like nothing short of necessary for the long term viability of the project. I really hope this effort succeeds, thank you to everyone pushing this!
I can't find it now but I recently saw a graph of new Debian Developers joining the project over time and it has sharply declined in recent years. I was on track to becoming a Debian Developer (attended a couple DebConfs, got some packages into the archive, became a Debian Maintainer) but I ultimately burned out in large part because of how painful Debian's tooling makes everything. Michael Stapelberg's post about leaving Debian really rings true: https://michael.stapelberg.ch/posts/2019-03-10-debian-windin...
Debian may still be "getting by" but if they don't make changes like this Git transition they will eventually stop getting by.
What I've always found off-putting about the Debian packaging system is that the source lives with the packaging. I find that I prefer Ports-like systems where the packaging specifies where to fetch the source from. I find that when the source is included with the packaging, it feels more unwieldy. It also makes updating the package clumsier, because the packager has to replace the embedded source, rather than just changing which source tarball is fetched in the build recipe.
Debian requires that packages be able to be built entirely offline.
> Debian guarantees every binary package can be built from the available source packages for licensing and security reasons. For example, if your build system downloaded dependencies from an external site, the owner of the project could release a new version of that dependency with a different license. An attacker could even serve a malicious version of the dependency when the request comes from Debian's build servers. [1]

[1] https://wiki.debian.org/UpstreamGuide#:~:text=make%20V=1-,Su...
So do Gentoo and Nix, yet they have packaging separate from the source code. The source is fetched, but the build is sandboxed from the network during the configure, build and install phases. So it's technically possible.
This doesn't say what you think it does. It says that every binary package should only depend on its declared source packages. It does not say that source packages must be constructed without an upstream connection.
What the OP was referring to is that Debian's tooling stores the upstream code along with the Debian build code. There is support tooling for downloading new upstream versions (uscan) and for incorporating the upstream changes into Debian's version control (uupdate) to manage this complexity, but it does mean that Debian effectively mirrors the upstream code twice: in its source management system (mostly salsa.debian.org nowadays), and in its archive, as Debian source archives.
All that is required for this to work (building offline) while staying immune to the problems described above: the package build metadata must contain a checksum of the source code archive, and Debian must mirror that source code.
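Concretely that's just a pinned digest plus a local mirror; a sketch with an illustrative tarball name:

```shell
# At import time: record the digest of the mirrored upstream archive.
sha256sum foo_1.0.orig.tar.gz > foo_1.0.orig.tar.gz.sha256
# At build time, fully offline: verify the mirrored copy before unpacking.
sha256sum -c foo_1.0.orig.tar.gz.sha256
```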
This is such a wonderful guarantee to offer to users. In most cases, I trust the Debian maintainers more than I trust the upstream devs (especially once you take into account supply chain attacks).
It's sad how much Linux stuff is moving away from apt to systems like snap and flatpak that ship directly from upstream.
> I find that when the source is included with the packaging, it feels more unwieldy.
On the other hand, it makes for a far easier life when bumping compile or run time dependency versions. There's only one single source of truth providing both the application and the packaging.
It's just the same with Docker and Helm charts. So many projects insist on keeping sometimes all of them in different repositories, making change proposals an utter PITA.
As a process of community transition, the team is right to focus on the need for more communication and documentation around the shift to git across the ecosystem.
I see a lot of value in how Steam helped communicate which software was and wasn't ready to run on their new gaming platform. Tools like verification ticks and defined statuses for packages are very useful to communicate progress and to motivate maintainers to upgrade. Consider designing a similar verification approach that helps the community easily track progress and nudge slow players. If it's all too technical, the community can't help move things along.
Correct me if I'm wrong, but as I understand it, the process is well underway towards moving the core systems and libraries (or whatever it's all called) across to the new way. But there's a massive job remaining for the extended libraries maintained by lots of other parties; this ecosystem has been using all manner of approaches, each of which has its drawbacks, and the big goal here is to get all these maintainers on board to switch over to the new git-based workflow that this transition team (and others) have been working hard to make logical and easy enough to implement.
Is that a fair general read of the situation? (I have further comments to make but wanted to check my basic assumptions first).
Forge != repository is a good design pattern. If the two are separate you can even use multiple forges per repository.
Perhaps you might host hardware designs or art assets that benefit from one kind of forge, alongside code that benefits from another? Or more simply use one forge for CI and another for code review.
Debian is kind of slow in adapting to the modern world.
I kind of appreciate that debian put FOSS at a core value very early on; in fact, it was the first distribution I used that forced me to learn the commandline. The xorg-server, or rather X11 server back then, was not working, so I only had the commandline and a lean debian handbook. I typed in the commands and learned from that. Before this I had SUSE, and it had a much thicker book with a fancypants GUI - and it was utterly useless. But that was in 2005 or so.

Now, in 2025, I have not used debian or any debian-based distribution in a long time. I either compile from source, loosely inspired by LFS/BLFS, or I typically use Manjaro these days, simply because it is the closest to a modern slackware variant (despite systemd). Slackware I used for a long time, but sadly it slowed down too much in the last 10 years, even with modern variants such as alienbob's; manjaro moves forward like 100x faster and it also works at the same time, including when I want to compile from source. For some reason, many older distributions failed to adapt to the modern era. Systemd may be one barrier here, but the issue is much more fundamental than that. For instance, you have many more packages now, and many things take longer to compile, e.g. LLVM and what not, which in turn is needed for mesa; then we have cmake, meson/ninja and so forth. A lot more software to handle nowadays.
> Debian is kind of slow in adapting to the modern world.
Yeah definitely. I guess this is a result of their weird idea that they have to own the entire world. Every bit of open source Linux software ever made must be in Debian.
If you have to upgrade the entire world it's going to take a while...
Most, because Debian is the only distro that strictly enforces its manpage and filesystem standards, and most upstream source packages don't care much, or rather have other ideas.
Lots. Because many upstream projects don't have their build system set up to work within a distribution (to get dependencies from the system and to install to standard places). All distros must patch things to get them to work.
Well, there are big differences in how aggressively things are patched. Arch Linux makes a point to strictly minimize patches and avoid them entirely whenever possible. That's a good thing, because otherwise, nonsense like the Xscreensaver situation ensues, where the original developers aggressively reject distro packages for mutilating their work and/or forcing old and buggy versions on unsuspecting users.
A fair few I expect, amongst actively developed apps/utils/libs. Away from sid (unstable) Debian packages are often a bit behind upstream but still supported, so security fixes are often back-ported if the upstream project isn't also maintaining older releases that happen to match the version(s) in testing/stable/oldstable.
The longer answer is that a lot of people already use Git for Debian version control, and the article expands on how this will be better integrated in the future. But what goes into the archive (for building) is fundamentally just a source package with a version number. There's a changelog, but you're free to lie in it if you so wish.
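For reference, an entry in that changelog looks like this (package name, version, and bug number are all made up):

```text
hello (2.10-3) unstable; urgency=medium

  * Fix FTBFS with GCC 14. (Closes: #1034567)

 -- Jane Maintainer <jane@debian.org>  Mon, 01 Jul 2024 12:00:00 +0000
```

The version in the topmost entry is what defines the source package's version; nothing in the archive machinery checks that the free-text part is truthful.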
I remember when a startup I used to work for made the transition from svn to git. They transitioned, then threw the guy who suggested the transition under the bus; he quit, and then the company collapsed. Lol!
I was hired at a small startup (~15 employees total) and one of the first things I did was to migrate their SVN repository to Git. Not too difficult, brought over the history and then had to write a bunch of tooling to handle the fact that not all of the source code was in one giant hierarchy anymore (since everything was microservices and self-contained libraries it made sense to split them out).
After I left that company I ended up at a larger company (~14k employees) in part because I'd worked on SVN-to-Git migrations before. Definitely a different beast, since there were a huge amount of workflows that needed changing, importing 10 years of SVN history (some of which used to be CVS history), pruning out VM images and ISOs that had been inadvertently added, rewriting tons of code in their Jenkins instance, etc.
All this on top of installing, configuring, and managing a geographically distributed internal Gitlab instance with multiple repositories in the tens or hundreds of gigabytes.
It was a heck of a ride and took years, but it was a lot of fun at the same time. Thankfully, since 'the guy who suggested the transition' was the CEO (at the first company) or the CTO (at the second), nothing went wrong, no one got thrown under buses, and both companies are still doing a-okay (as far as source control goes).
The simple task of following a bug requires you to:
1. Send an empty email to a special address for the bug.
2. Wait 15-30 minutes for Debian's graylisting mail server to accept your email and reply with a confirmation email.
3. Reply to the confirmation email.
The last time I tried to follow a bug, I never got the confirmation email.
In practically every other bug tracker, following a bug is just pressing a button.
Like most of Debian's developer tooling, the bug tracker gets the job done (most of the time) but it's many times more inconvenient than it needs to be.
As someone who uses Debian and very occasionally interacts with the BTS, what I can say is this:
As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.
Again, because of the email-only design, one must memorise commands or reference a text file to take actions on bugs. This may be decent for power users, but it's a horrible UX for most people. I can only assume that there is some analogue of the `reportbug` command I don't know of for maintainers that actually offers some amount of UI assistance. As a user, I have no idea how to close my own bugs, or even how to find out which bugs I've created, so the burden falls entirely on the package maintainers to do all the work of keeping the bug tracker tidy (something that developers famously love to do…).
The search/bug view also does not work particularly well in my experience. The way that bugs are organised is totally unintuitive if you don’t already understand how it works. Part of this is a more general issue for all distributions of “which package is actually responsible for this bug?”, but Debian BTS is uniquely bad in my experience. It shows a combination of status and priority states and uses confusing symbols like “(frowning face which HN does not allow)” and “=” and “i” where you have to look at the tooltip just to know what the fuck that means.
>Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!
Huh. I just learned to use quilt this year as part of learning debian packaging. I've started using it in some of my own forks so I could eventually, maybe, contribute back.
I guess the old quilt/etc recommendation in the debian build docs is part of the documentation-update work that the linked page says is needed.
> If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that.
For Debian, that's what tag2upload is doing.
If "whatever tools they see fit" means "patch quilting" then please no. Leave the stone age and enter the age of modern DVCS.
git can be seen as porcelain on top of patch quilting, so it's not as much stone age as one might think
Moving from a patch stack maintained by quilt to git is what this article is about.
> That's impossible to do with plain Git and needs extra tools or special workflows even outside of Debian
Rebase.
That’s what git-rebase is for, and it is built into standard git.
dgit handles the whole affair with very little fuss, I've found, and is quite a pleasant workflow.
> 4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.
What is patch quilting, for the blissfully unaware?
They could take a look at how pkgsrc [1] works.
[1] https://www.pkgsrc.org/
pkgsrc is great, I use this on smartos (as just an end user) and it’s extremely straightforward
You might think so, but here we are at the end of 2025 and this is still a WIP.
I don’t think it’s a bad move, but it also seems like they were getting by with patches and tarballs.
What I've always found off-putting about the Debian packaging system is that the source lives with the packaging. I find that I prefer Ports-like systems where the packaging specifies where to fetch the source from. I find that when the source is included with the packaging, it feels more unwieldy. It also makes updating the package clumsier, because the packager has to replace the embedded source, rather than just changing which source tarball is fetched in the build recipe.
Debian requires that packages be able to be built entirely offline.
> Debian guarantees every binary package can be built from the available source packages for licensing and security reasons. For example, if your build system downloaded dependencies from an external site, the owner of the project could release a new version of that dependency with a different license. An attacker could even serve a malicious version of the dependency when the request comes from Debian's build servers. [1]
[1] https://wiki.debian.org/UpstreamGuide#:~:text=make%20V=1-,Su...
So do Gentoo and Nix, yet they have packaging separate from the source code. The source is fetched, but the build is sandboxed from the network during the configure, build and install phases. So it's technically possible.
11 replies →
This doesn't say what you think it does. It says that every binary package should only depend on its declared source packages. It does not say that source packages must be constructed without an upstream connection.
What the OP was referring to, is that Debian's tooling stores the upstream code along with the debian build code. There is support tooling for downloading new upstream versions (uscan) and for incorporating the upstream changes into Debian's version control (uupdate) to manage this complexity, but it does mean that Debian effectively mirrors the upstream code twice: in its source management system (mostly salsa.debian.org nowadays), and in its archive, as Debian source archives.
All that is required for offline builds to work, while staying immune to all the bad things you describe, is that the package metadata contain a checksum of the source-code archive, and that Debian mirror that archive itself.
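The checksum-pinning idea can be sketched in a few lines of Python (the tarball contents here are hypothetical; real tooling pins the hash of the actual `.orig.tar.gz` in the package metadata):

```python
import hashlib

def verify_source(data: bytes, expected_sha256: str) -> bool:
    """Check a mirrored source archive against the pinned checksum
    before building offline."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Stand-in for the mirrored upstream archive.
tarball = b"pretend this is the upstream .orig.tar.gz"

# What the package metadata would record at upload time.
recorded = hashlib.sha256(tarball).hexdigest()

assert verify_source(tarball, recorded)            # untouched: builds
assert not verify_source(tarball + b"x", recorded) # tampered: rejected
```

With the hash pinned and the archive mirrored, neither a relicensed re-release nor a malicious substitute from upstream can change what gets built.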
This is such a wonderful guarantee to offer to users. In most cases, I trust the Debian maintainers more than I trust the upstream devs (especially once you take into account supply-chain attacks).
It's sad how much Linux stuff is moving away from apt to systems like snap and flatpak that ship directly from upstream.
> What I've always found off-putting about the Debian packaging system is that the source lives with the packaging.
Many packages have stopped shipping the whole source and just keep the debian/ directory in Git.
Notable examples are
- gcc-*
- openjdk-*
- llvm-toolchain-*
and many more.
But isn't that incompatible with the proposed transition to Git?
It made a lot of sense before centralized source hosting (Debian packaging predates SourceForge, let alone GitHub).
But it's still nice to have when an upstream source goes dark unexpectedly, as does occasionally still happen.
> I find that when the source is included with the packaging, it feels more unwieldy.
On the other hand, it makes for a far easier life when bumping compile or run time dependency versions. There's only one single source of truth providing both the application and the packaging.
It's just the same with Docker and Helm charts. So many projects insist on keeping sometimes all of them in different repositories, making change proposals an utter PITA.
As a process of community transition the team is right to focus on the need for more communications and documentation around the shift to git across the ecosystem.
I see a lot of value in how Steam helped communicate which software was and wasn't ready to run on their new gaming platform. Tools like verification ticks and defined statuses for packages are very useful for communicating progress and motivating maintainers to upgrade. Consider designing a similar verification approach that helps the community easily track progress and nudge slow movers. If it's all too technical, the community can't help move things along.
https://www.steamdeck.com/en/verified
Correct me if I'm wrong, but as I understand it, the process of moving the core systems and libraries (or whatever it's all called) across to the new way is well underway. But there's a massive job remaining for the extended libraries maintained by lots of other parties: this ecosystem has been using all manner of approaches, each with its drawbacks, and the big goal here is to get all these maintainers on board to switch to the new git-based workflow that this transition team (and others) have been working hard to make logical and easy enough to implement.
Is that a fair general read of the situation? (I have further comments to make but wanted to check my basic assumptions first).
Forge != repository is a good design pattern. If the two are separate you can even use multiple forges per repository.
Perhaps you might host hardware designs or art assets that benefit from one kind of forge, alongside code that benefits from another? Or more simply use one forge for CI and another for code review.
https://archive.ph/vp6rp
Debian is kind of slow in adapting to the modern world.
I kind of appreciate that Debian made FOSS a core value very early on; in fact, it was the first distribution I used that forced me to learn the command line. The xorg-server, or rather X11 server back then, was not working, so I only had the command line and a lean Debian handbook. I typed in the commands and learned from that. Before this I had SUSE, and it had a much thicker book with a fancypants GUI — and it was utterly useless. But that was in 2005 or so.
Now, in 2025, I have not used Debian or any Debian-based distribution in a long time. I either compile from source, loosely inspired by LFS/BLFS, or I typically use Manjaro these days, simply because it is the closest thing to a modern Slackware variant (despite systemd). I used Slackware for a long time, but sadly it slowed down too much in the last 10 years, even with modern variants such as AlienBOB's; Manjaro moves forward maybe 100x faster and it also works, including when I want to compile from source. For some reason, many older distributions failed to adapt to the modern era. Systemd may be one barrier here, but the issue is more fundamental than that: there are many more packages now, and many things take longer to compile, e.g. LLVM and whatnot, which in turn is needed for Mesa; then we have CMake, Meson/Ninja, and so forth. A lot more software to handle nowadays.
> Debian is kind of slow in adapting to the modern world.
Yeah definitely. I guess this is a result of their weird idea that they have to own the entire world. Every bit of open source Linux software ever made must be in Debian.
If you have to upgrade the entire world it's going to take a while...
> The canonical git format is “patches applied”.
How many Debian packages have patches applied to upstream?
Most, because Debian is the only distro which strictly enforces its manpage and filesystem standards, and most upstream source packages don't care much, or have other ideas.
Lots. Many upstream projects don't have their build system set up to work within a distribution (to get dependencies from the system and to install to standard places). All distros must patch things to get them to work.
Well, there are big differences in how aggressively things are patched. Arch Linux makes a point to strictly minimize patches and avoid them entirely whenever possible. That's a good thing, because otherwise, nonsense like the Xscreensaver situation ensues, where the original developers aggressively reject distro packages for mutilating their work and/or forcing old and buggy versions on unsuspecting users.
6 replies →
A fair few I expect, amongst actively developed apps/utils/libs. Away from sid (unstable) Debian packages are often a bit behind upstream but still supported, so security fixes are often back-ported if the upstream project isn't also maintaining older releases that happen to match the version(s) in testing/stable/oldstable.
How do derivatives like Ubuntu build their packages compared to Debian?
I always thought that Debian is already on git, so this confused me. How is source control currently (or was) done with the Debian project?
The short answer is that it's not.
The longer answer is that a lot of people already use Git for Debian version control, and the article expands on how this will be better-integrated in the future. But what goes into the archive (for building) is fundamentally just a source package with a version number. There's a changelog, but you're free to lie in it if you so wish.
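The version number and changelog mentioned above live in `debian/changelog`, whose entry header has a fixed shape: `package (version) distribution; urgency=level`. A small sketch of pulling the version out of such a line (the entry itself is hypothetical):

```python
import re

# First line of a hypothetical debian/changelog entry.
line = "hello (2.10-3) unstable; urgency=medium"

# Format: "package (version) distribution; urgency=level"
m = re.match(r'^(\S+) \(([^)]+)\) (\S+); urgency=(\S+)', line)
assert m is not None

package, version, dist, urgency = m.groups()
assert package == "hello"
assert version == "2.10-3"   # this is what the archive keys on
```

The point of the comment stands: the archive trusts that version string, and nothing forces the changelog prose attached to it to be truthful.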
I remember when a startup I used to work for made the transition from svn to git. They transitioned, then threw the guy who suggested the transition under the bus; he quit, and then the company collapsed. Lol!
I was hired at a small startup (~15 employees total) and one of the first things I did was to migrate their SVN repository to Git. Not too difficult: brought over the history, then had to write a bunch of tooling to handle the fact that not all of the source code was in one giant hierarchy anymore (since everything was microservices and self-contained libraries, it made sense to split them out).
After I left that company I ended up at a larger company (~14k employees) in part because I'd worked on SVN-to-Git migrations before. Definitely a different beast, since there were a huge amount of workflows that needed changing, importing 10 years of SVN history (some of which used to be CVS history), pruning out VM images and ISOs that had been inadvertently added, rewriting tons of code in their Jenkins instance, etc.
All this on top of installing, configuring, and managing a geographically distributed internal Gitlab instance with multiple repositories in the tens or hundreds of gigabytes.
It was a heck of a ride and took years, but it was a lot of fun at the same time. Thankfully, 'the guy who suggested the transition' was the CEO (in the first company) or the CTO (in the second), so nothing went wrong, no one got thrown under buses, and both companies are still doing a-okay (as far as source control goes).
git is a skill check on learning tools to get a job done
git is actually a case-study in "Worse Is Better". hg beats it in every way, except pure speed. Of course, git is still way better than svn, tho.
2 replies →
This is great; I hate fighting distro source tools when I want to debug something.
This just adds a new tool though.
Obligatory XKCD reference: https://xkcd.com/927/
New tools during the transition, hopefully fewer tools in the long run. Also things making a lot more sense in the long run.
I wish Debian would also transition to a modern bug tracker. Current one is very archaic.
It surely won't win any beauty contests, but do you think it's missing any needed functionality?
Sincere question. I haven't interacted with it much in ages.
The simple task of following a bug requires you to:
1. Send an empty email to a special address for the bug.
2. Wait 15-30 minutes for Debian's graylisting mail server to accept your email and reply with a confirmation email.
3. Reply to the confirmation email.
The last time I tried to follow a bug, I never got the confirmation email.
In practically every other bug tracker, following a bug is just pressing a button.
Like most of Debian's developer tooling, the bug tracker gets the job done (most of the time) but it's many times more inconvenient than it needs to be.
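For context, the "special address" in step 1 follows a per-bug scheme — as I understand it, `<bug>-subscribe@bugs.debian.org` (the bug number below is hypothetical):

```python
# Debian BTS per-bug control addresses, as I understand the scheme.
# The bug number used here is hypothetical.
def subscribe_address(bug: int) -> str:
    return f"{bug}-subscribe@bugs.debian.org"

def unsubscribe_address(bug: int) -> str:
    return f"{bug}-unsubscribe@bugs.debian.org"

assert subscribe_address(123456) == "123456-subscribe@bugs.debian.org"
```

So "following a bug" means composing a mail to that address by hand — exactly the button press other trackers hide behind a click.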
6 replies →
As someone who uses Debian and very occasionally interacts with the BTS, what I can say is this:
As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.
Again, because of the email-only design, one must memorise commands or reference a text file to take actions on bugs. This may be decent for power users, but it's a horrible UX for most people. I can only assume that there is some maintainer-side analogue to the `reportbug` command that actually offers some amount of UI assistance. As a user, I have no idea how to close my own bugs, or even how to find which bugs I've created, so the burden falls entirely on the package maintainers to do all the work of keeping the bug tracker tidy (something that developers famously love to do…).
The search/bug view also does not work particularly well in my experience. The way that bugs are organised is totally unintuitive if you don’t already understand how it works. Part of this is a more general issue for all distributions of “which package is actually responsible for this bug?”, but Debian BTS is uniquely bad in my experience. It shows a combination of status and priority states and uses confusing symbols like “(frowning face which HN does not allow)” and “=” and “i” where you have to look at the tooltip just to know what the fuck that means.
3 replies →
It's just annoyingly clunky to use any time I need to interact with it, versus modern bug trackers like GitLab's, etc.
Also, locally patching reportbug to support the XDG base directory spec is a chore (the maintainers didn't accept the fix for it for years).
To be fair, it fits my exact needs, and without the common JavaScript bloat.
So kudos to its authors.
Ian Jackson (the author of this article) also wrote debbugs.
>Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!
Huh. I just learned to use quilt this year as part of learning Debian packaging. I've started using it in some of my own forks so I could eventually, maybe, contribute back.
I guess the old quilt-based recommendation in the Debian packaging docs is part of the documentation updates that the linked page says are needed.