Comment by zrm

> That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time.

That's not what it's about.

What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
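
As a concrete illustration of the kind of change in question: the usual defence is to state a setting explicitly rather than rely on the shipped default, so a new version can't silently flip it. GSSAPIAuthentication is the real sshd_config keyword; the default flipping between versions is just the hypothetical above.

    # /etc/ssh/sshd_config: pin the behaviour you depend on explicitly,
    # so an upgrade that changes the shipped default can't silently
    # turn it off.
    GSSAPIAuthentication yes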

> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourage a whack-a-mole approach - because in the backport model, people want fixes they can backport.

They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.

But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.

You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...

So when you do update and get that GSSAPI change, it comes with two years' worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.

And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.

So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.

  • > You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...

    Update what in production, specifically?

    If you need a newer version of Python or Postgres or whatever, it is possible to install it from third-party repos or compile from source yourself (sketch at the end of this comment). But having a team of folks watch all the other code out there is a load off my plate: not worrying about libc, or OpenSSH, or OpenSSL, or zlib, or a thousand other dependencies. If I need the latest version for a particular service I would install that separately, but otherwise the whole point of a 'packagized' system is to let other folks worry about those things.

    > So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.

    I've done in-place upgrades of Debian from version 5 to 11 on many machines at my last job, never once re-installing from scratch, and they've all gone fine.

    Further, when updates come down from the Debian repos I don't worry about applying them, because I know there aren't going to be weird changes in behaviour: I'm more confident deploying things like security updates because the new .deb files have very focused changes.
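
    To make the "third-party repos" option concrete, here is a rough sketch using Debian's own backports repo as one example; the release codename and package name are illustrative, not a recommendation:

        # Enable backports for the current stable release (codename illustrative)
        echo 'deb http://deb.debian.org/debian bookworm-backports main' \
            | sudo tee /etc/apt/sources.list.d/backports.list
        sudo apt update

        # Pull just the one package you need from backports;
        # everything else stays on stable.
        sudo apt install -t bookworm-backports postgresql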

  • Getting through the issues once every 2 years is entirely fine. Go longer than that and you get problems. We do that for ~500 systems of very varied use. I wouldn't want to do it yearly (or, dread the thought, on a rolling release), but I also wouldn't want to do it any less often, because of the issues you mentioned.

    > And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.

    Having that sprung on you because you decided to run everything on latest is worse.

    "Oh we have CVE, we now need to uproot everything because new version that fixes it also changed shit"

    With a release every year or two you can *plan* for it. You are not forced into it as with "rolling" releases, because with rolling you NEED to take in new features together with bugfixes; with a Debian-like release cycle you can do it system by system when the new version comes out, and the "old" one still gets security fixes, so you're not instantly screwed.

    > So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.

    It exists in that format because people are running businesses bigger than "a man with a webpage deployed off master every few days".

  • There are two different kinds of updates.

    One is security updates and bug fixes. These need to fix the problem with the smallest change to minimize the amount of possible breakage, because the code is already vulnerable/broken in production and needs to be updated right now. These are the updates stable gets.

    The other is changes and additions. They're both more likely to break things and less important to move into production the same day they become public.

    You don't have to wait until testing is released as stable to run it in your test environment. You can find out about the changes the next release will have immediately, in the test environment, and thereby have plenty of time to address any issues before those changes move into production.
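
    Concretely, that can be as simple as pointing a test machine's apt sources at the next release's codename; a minimal sketch, with the codenames as illustrative examples:

        # /etc/apt/sources.list on a test machine: track the next release
        # by codename (e.g. 'trixie' while 'bookworm' is stable), then run
        #   apt update && apt full-upgrade
        # to see upcoming changes well before they reach production.
        deb http://deb.debian.org/debian trixie main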

    • > One is security updates and bug fixes.

      That's where you're wrong. They're not one and the same.

      Debian stable often defers non-security bug fixes for up to two years by playing this game.

      I'm not interested in new features unless they make things actually work.

      Debian stable time and again favors broken over new. Broken kernels, broken packages. At least they're stable in their brokenness.

      Hence my complaint.

    • You definitely need different channels for high priority fixes and normal releases, stable and testing releases and all that.

      But two years is impractical and Debian gets a ton of friction over it. Web browsers and maybe one or two other packages are able to carve out exceptions, because those packages are big enough for the rules to bend and no one can argue with a straight face that Debian is going to somehow muster up the manpower to do backports right.

      But everyone else - whether they have to deal with Debian shipping ancient dependencies, or they're upstream package maintainers expected to field bug reports from ancient versions - is expected to just suck it up, because no one else is big enough and organized enough to say "hey, it's 2026, we have better ways and this has gotten nutty".

      Maybe the new influx of LLM-discovered security vulnerabilities will start to change the conversation; I'm curious how it'll play out.

  • If you don't like the Debian model, don't use Debian. There are people who like the Debian model; it seems like you aren't one of them, though. That doesn't make them wrong.

  • > You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...

    Doing terrible work every 2 years is better than doing it every day?

    • I've brought this up with leap-second adjustments: a process you do once every two years is one you'll never get good at. If you want them to go smoothly, do them monthly.

      Let's Encrypt has been a great example of this in certificate management.

    • > Doing terrible work every 2 years is better than doing it every day?

      And by skipping some releases, you will have less of that work. When something is changed in one release, then changed again in the next, by waiting you only have to make the change once instead of twice. And sometimes you don't have to do anything at all, when something is introduced in one release and reverted in the next.

    • Personally I'd rather have a manageable stream of little bad things consistently over time rather than suddenly having a mountain of bad things one day.

  • Clearly you disagree with the Debian stable perspective. That's fine; it's not for everyone. You can just run Debian unstable or Debian testing, depending on where exactly you draw the line.

    If you want a rolling-release-style distro, just run Debian unstable. That's what you get: it's on par with all the other constantly updated distros out there. Or just run one of those.

    Also, Debian stable has a lifetime a lot longer than 2 years; see https://www.debian.org/releases/. Some of us need distros like stable because we are in giant orgs that are overworked and have long release cycles. Our users want stuff to "just work", and stable promises that if X worked at release, it will keep working until we stop support. You don't add new features to a stable release.

    From a personal perspective: Debian Stable is for your grandparents or young children. You install Stable, turn on auto-update, and every 5-ish years you spend a day upgrading them to the next stable release. Then you spend a week or two helping them through all the new changes, and after that you have minimal support calls from them for 5-ish years. If you handed them a rolling release or Debian unstable, you'd have constant support calls.
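
    For the "turn on auto-update" step, Debian's stock mechanism is the unattended-upgrades package; a minimal sketch using the standard commands:

        # Install and enable automatic security updates
        sudo apt install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades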

    • ...or just leave the grandparents on the previous version of Stable until they get a new computer. Honestly, I'm not a huge fan of upgrading software at all if I'm the one supporting the machines.

There are bleeding-edge and rolling-release distributions. Debian is simply not that and has no desire to be.

> You don't want that as an automatic update because it will break in production for anyone who is actually using it

The problem with this take is that it's stuck in the early 2000s, when all servers were pets to be cared for and lovingly updated in place.

It’s also circular: you have the same problem with the current model if you don’t have a test environment. And if you do have a test environment, releases can be tested and validated at a much higher cadence.

> What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled.

Debian patches defaults in OpenSSH code so it behaves differently than upstream.

They shouldn't legally be allowed to call it OpenSSH, let alone lecture people about it.

Let them call their fork DebSSH, like they had to do with "Iceweasel" and all the other nonsense they mire themselves in.

When you break software to the point you change how it behaves you shouldn't be allowed to use the same name.

  • It's called open source: people are allowed to compile it as they wish. That's part of the upside, and doing so doesn't mean anything is broken.