Comment by washingupliquid
17 hours ago
Maybe this is the kick in the ass Debian needs to upgrade the embarrassingly ancient dnsmasq in "stable": while I can't think of any new features, the latest versions contain many non-CVE bug fixes.
But I doubt it, they will lazily backport these patches to create some frankenstein one-off version and be done with it.
Before anyone says "tHaT's wHaT sTaBlE iS fOr": they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable". They would rather ship useless, broken code than something too new. It's crazy.
They're not going to put a newer version in stable. The way stable gets newer versions is that the newer version lands in testing; every two years testing becomes stable (and stable becomes oldstable), at which point the version from testing becomes the version in stable.
The thing to complain about is if the version in testing is ancient.
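To make the mechanics concrete, here's roughly what that looks like in apt sources (codenames illustrative):

    # /etc/apt/sources.list
    # Tracking the alias: "stable" silently points at the next release
    # on release day, so big version jumps arrive on Debian's schedule.
    deb http://deb.debian.org/debian stable main

    # Pinning the codename: you stay on this release until you edit it,
    # which is how most stable users actually run things.
    deb http://deb.debian.org/debian trixie main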
Looks like the version in stable is 2.91, which was released within a couple months of trixie. It's not 'ancient' by any stretch.
FWIW the fixes referenced here are already fixed in trixie: https://security-tracker.debian.org/tracker/source-package/d...
Yeah, I was about to comment: the parent says "if it is ancient", and it is not. So the root comment is a nothingburger. Stable is one release cycle old, and depending on how things play out, testing may have 2.93 or later anyway.
No, that's exactly the thing to complain about.
That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
And the disadvantage is that backporting is manual, resource-intensive, and prone to error - and the projects most heavily invested in that model are also the ones investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.
On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
> That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time.
That's not what it's about.
What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
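And if you depend on a default like that, you pin it explicitly rather than trusting the compiled-in default - a sketch, using the real sshd_config option from the (hypothetical) scenario above:

    # /etc/ssh/sshd_config
    # State the value explicitly so a new upstream default
    # can't silently flip the behaviour under you.
    GSSAPIAuthentication yes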
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourage a whack-a-mole approach - because in the backport model, people want fixes they can backport.
They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.
But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.
> That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
The automatically tested Debian release is called Debian Testing. And it is stable enough.
Debian Stable is basically "we target a particular release with our dependencies instead of requiring the customer to update the entire system together with our software". That model works just fine as long as you don't go too far back.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourage a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
Narrator: It turned out things were not getting worse, they were just fine.
> We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
That project is Red Hat, not Debian: they backport entire features to old versions (together with bugs!)
If you want that, you don't want Debian. Other people do.
Some people will even run Debian on the desktop. I would never, but some people get real upset when anything changes.
Debian does regularly bring newer versions of software: they release about every two years. If you want the latest and greatest Debian experience, upgrade Debian on week one.
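The week-one upgrade is the usual in-place dance, something like this (read the release notes first; codenames illustrative):

    # point apt at the new release and upgrade in place
    sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
    apt update
    apt full-upgrade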
From your description, you seem to want Arch but made by Debian?
Refactoring and rewrites prove time and time again that they also introduce new bugs and changes in behaviour that users of stable releases do not want.
For what you want, there are other distributions. Debian also has stable-backports, which does what you want.
No need to rage on distributions that also provide exactly what their users want.
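For anyone who hasn't used it, stable-backports is opt-in per package, roughly like this (assuming the package you want has actually been backported):

    # /etc/apt/sources.list.d/backports.list
    deb http://deb.debian.org/debian trixie-backports main

    # backports are never pulled in by default; you request them explicitly
    apt update
    apt install -t trixie-backports dnsmasq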
How do you do QA without locking a set of features?
You have far too much faith in automated testing.
Don't get me wrong, I use and encourage extensive automated testing. However only extensive manual testing by people looking for things that are "weird" can really find all bugs. (though it remains to be seen what AI can do - I'm not holding my breath)
Close: new versions go into unstable, where development happens; testing is where things go to marinate for a while.
You don't have to use Debian stable, if you'd prefer Ubuntu every 6 months, or Fedora (6 months? 9 months?), or even Arch Linux updated daily ...
I use Arch on my laptop; when I got it 2 years ago the AMD GPU was a bit new, so it was prudent to get the latest kernel, Mesa, everything. Since I use it daily it's not bad to update weekly and keep on top of occasional config migrations.
I use Debian stable on my home server; it's been in-place upgraded 4-ish times over 10 years. I can install weekly updates without worrying about config updates and such. I set up most of the stuff I wanted years ago and haven't really wanted new features since, though I have installed Tailscale and Jellyfin from their separate Debian package repos, so those are very current. It does the same jobs I wanted it to do 8 years ago, with super low maintenance.
But if you don't want Debian stable, that's fine. Just let others enjoy it.
For what it's worth, Debian had a security update for dnsmasq yesterday, presumably to address this.
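Easy to verify from a stable box, something like:

    # check which version is installed/available, then skim the changelog
    apt policy dnsmasq
    apt changelog dnsmasq | head -40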
I dunno, 2.92 seems to bring in some new features and changes that would not typically be brought into a stable release: https://thekelleys.org.uk/dnsmasq/CHANGELOG
You can always ask the Debian project for your money back.
About a decade ago I switched to Ubuntu LTS because of Debian’s “policy?” of having pretty old packages in “stable” and long release cycles.
Nowadays, even with Ubuntu’s two-year-or-so release cycle, I have to use third-party packages to get up-to-date software (PHP being one) rather than some version from three years ago.
We no longer live in a world (with few exceptions) where running a 3-5 year old distribution (still supported) makes sense.
It depends on how you look at it. I use Debian stable in the smallest possible configuration because it is, well, stable. A rock on which I put Docker to run the actually useful services, which are updated the way I want.
If I was to run dnsmasq on Debian, it would be in a container. Since I run Pihole (in a container), it kinda is.
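Which means the host's dnsmasq package version barely matters - a sketch of the setup (tag and paths illustrative; Pi-hole's FTL is its own dnsmasq fork, updated on the image's schedule, not Debian's):

    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
      -e TZ=Europe/London \
      -v /srv/pihole:/etc/pihole \
      pihole/pihole:latest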
whatever you're on, stop, it's not making your brain any better
That's what stable is for though. Like, sure, stable's policy is ludicrous and you would have to be insane to run stable. But the remedy for that isn't to try to change Debian policy, it's to get people to stop running stable. Maybe once no-one uses it Debian will see sense.
What if the new release that contains the fixes has new dependencies, and those also have new dependencies? I assume they sometimes have to Frankenstein packages to keep the target app's boundaries intact while still getting major vulns patched in stable.
fixed, fixed, fixed, fixed, fixed and fixed
> ...they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable"
Irrelevant strawman, since you're not accusing the dnsmasq package in Debian stable of being straight-up broken.