Comment by plomme
4 days ago
Why not take it further and not update dependencies at all until you need to because of some missing feature or systems compatibility you need? If it works it works.
The arguments for doing frequent releases partially apply to upgrading dependencies. Upgrading gets harder the longer you put it off. It’s better to do it on a regular schedule, so there are fewer changes at once and it preserves knowledge about how to do it.
A cooldown is a good idea, though.
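As a rough sketch of what a cooldown gate could look like (assuming the public npm registry's metadata endpoint, whose packument includes a "time" map of version to publish date; the package names and the 14-day window are made up for illustration):

```python
# Sketch: refuse to adopt dependency versions younger than a cooldown window.
# Assumes the public npm registry (registry.npmjs.org), whose package metadata
# includes a "time" map of version -> ISO-8601 publish date.
# Package names, versions, and the 14-day window are illustrative.
import json
import urllib.request
from datetime import datetime, timezone, timedelta

COOLDOWN = timedelta(days=14)

def published_at(package: str, version: str) -> datetime:
    """Return the publish timestamp of a specific version from the npm registry."""
    with urllib.request.urlopen(f"https://registry.npmjs.org/{package}") as resp:
        meta = json.load(resp)
    # "time" maps each published version to its ISO-8601 timestamp.
    return datetime.fromisoformat(meta["time"][version].replace("Z", "+00:00"))

def cooled_down(package: str, version: str) -> bool:
    """True if the version has been public for at least the cooldown window."""
    age = datetime.now(timezone.utc) - published_at(package, version)
    return age >= COOLDOWN

if __name__ == "__main__":
    # Hypothetical candidate upgrades, e.g. taken from `npm outdated --json`.
    candidates = {"left-pad": "1.3.0", "express": "4.19.2"}
    for pkg, ver in candidates.items():
        status = "ok to adopt" if cooled_down(pkg, ver) else "still in cooldown"
        print(f"{pkg}@{ver}: {status}")
```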
There's another variable, though, which is how valuable "engineering time now" is vs. "engineering time later."
Certainly, having a regular/automated update schedule may take less clock time in total (due to preserved knowledge etc.), and incur less long-term risk, than deferring updates until a giant, risky multi-version multi-dependency bump months or years down the road.
But if you have limited engineering resources (especially for a bootstrapped or cost-conscious company), or if the risks of outages now are much greater than the risks of outages later (say, once you're 5 years in and have much broader knowledge on your engineering team), then the calculus may very well shift towards freezing now, upgrading later.
And in a world where supply chain attacks will get far more subtle than Shai-Hulud, especially with AI-generated payloads that can evolve as worms spread to avoid detection, and that may not need build-time scripting at all but instead defer their behavior until your code calls them, macro-level slowness isn't necessarily a bad thing.
(It should go without saying that if you choose to freeze things, you should subscribe to security notification services that can tell you when a security update does release for a core server-side library, particularly for things like SQL injection vulnerabilities, and that your team needs the discipline to prioritize these alerts.)
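For the monitoring half, a minimal sketch of polling the public OSV database (api.osv.dev) for advisories against a frozen pin list; the pins and ecosystem are illustrative, and in practice you'd read them from your lockfile and run this on a schedule:

```python
# Sketch: poll the public OSV vulnerability database (https://osv.dev)
# for advisories affecting a frozen set of pinned dependencies.
# The pins and ecosystem below are illustrative.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "npm") -> list[str]:
    """Return OSV advisory IDs affecting this exact package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]

if __name__ == "__main__":
    pins = {"lodash": "4.17.20", "express": "4.17.1"}  # hypothetical frozen pins
    for pkg, ver in pins.items():
        ids = known_vulns(pkg, ver)
        if ids:
            print(f"{pkg}@{ver}: advisories {', '.join(ids)}")
        else:
            print(f"{pkg}@{ver}: no known advisories")
```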
When you do regular updates, they are quick. You can also timebox the work, and if an update turns out harder than expected, back it out and plan it separately.
Also, it keeps you in touch with your deps, so you can consider whether each one is even worth keeping. My favorite update was removing the dep entirely (or starting a plan to remove it because it was interfering with regular updates).
At the same time, unplanned engineering time is almost always more expensive than planned engineering time. I'd rather have some regular, expected, upgrade work, than all of a sudden having to scramble because I need something at a moment when I didn't plan for that.
> Upgrading gets harder the longer you put it off.
This is only true if you install dependencies that break backwards compatibility.
Personally, I avoid this as much as possible.
There is a Goldilocks effect. Dependency just came out a few minutes ago? There's been no time for the community to catch a vulnerability, no real coverage from dependency scans, and it's a risk. Dependency came out a few months ago? It likely has a large number of known vulns.
That is indeed what one should do IMO. We've known for a long time now in the ops world that keeping versions stable is a good way to reduce issues, and it seems to me that the same principle applies quite well to software dev. I've never found the "but then upgrading is more of a pain" argument to be persuasive, as it seems to be equally a pain to upgrade whether you do it once every six months or once every six years.
The 'pain' comes from breaking changes. At worst, if you delay, you're going to ingest the same quantity of changes; at best, you might skip some short-lived ideas.
> Why not take it further and not update dependencies at all until you need to because of some missing feature or systems compatibility you need? If it works it works.
Indeed, there are people doing that, and communities where the consensus is that such an approach makes sense, or at least isn't frowned upon. (Hi, Gophers)
This works until you consider regular security vulnerability patching (which we have compliance/contractual obligations for).
This only makes sense for vulnerabilities that can actually be exploited in your particular use case and configuration of the library. A lot of vulns might just be noise and not exploitable, so there's no need to patch.
Yes and no.
The problem is that codebases are continuously evolving. A safe decision now might not be a safe decision in the future; it's very easy to accidentally introduce a new code path that does make you vulnerable.
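One way to split the difference is a crude CI guard, sketched below under the assumption of a Python codebase and a hypothetical watch list: it fails the build as soon as a module whose patch you've deliberately deferred starts being imported somewhere new. It only catches direct imports, so it's no substitute for real reachability analysis, but it turns "we decided this isn't reachable" into something the build re-checks.

```python
# Sketch: a crude CI guard for the "we decided this vuln isn't reachable" case.
# It scans the source tree for imports of modules whose patches were consciously
# deferred, so a newly introduced code path trips the build instead of going
# unnoticed. The watch list is hypothetical; this only catches direct imports.
import ast
import pathlib
import sys

# Hypothetical modules with a known, deliberately unpatched advisory.
WATCHED_MODULES = {
    "vulnerable_parser": "deferred: advisory only affects the streaming API",
    "legacy_crypto": "deferred: not exposed to untrusted input",
}

def imported_modules(path: pathlib.Path) -> set[str]:
    """Top-level module names imported by one Python source file."""
    tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def main(root: str = "src") -> int:
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for mod in imported_modules(path) & WATCHED_MODULES.keys():
            hits.append(f"{path}: imports {mod} ({WATCHED_MODULES[mod]})")
    for line in hits:
        print(line)
    return 1 if hits else 0  # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```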
Because updates don't just include new features but also bug and security fixes. As always, it probably depends on the context how relevant this is to you. I agree that cooldown is a good idea though.
> Because updates don't just include new features but also bug and security fixes.
This practice needs to change, although it will be almost impossible to get a whole ecosystem to adopt it. You shouldn't have to take new features (and their associated new problems) just to get bug fixes and security updates. They should be offered in parallel. We need to get comfortable again with parallel maintenance branches for each major release line, and comfortable with backporting fixes to older releases.
I maintain both commercial and open source libs. This is a non starter in both cases. It would easily double if not triple the workload.
For open source, well, these are volunteer projects on my own time; you are always welcome to fork a given version and backport any fixes that land on main/master.
For commercial libs, our users are not willing to pay extra for this service, so we don't provide it. They would rather stay on an old version and update the entire code base at given intervals. Even when we do release patch versions, there is surprisingly little uptake.
Are you just referring to backporting?
Semver was invented to facilitate that. If only everyone adhered to it.
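For illustration, a small sketch of what adherence buys a consumer, using the third-party packaging library (PEP 440 specifiers rather than strict semver, but the idea is the same): a compatible-release pin keeps accepting patch releases while refusing minors and majors. The version numbers and the imaginary library are made up.

```python
# Sketch: a compatible-release pin accepts bug-fix releases only,
# assuming the maintainer follows semver-style numbering.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

fixes_only = SpecifierSet("~=1.4.2")  # compatible-release pin: >=1.4.2, <1.5.0

for candidate in ["1.4.3", "1.4.9", "1.5.0", "2.0.0"]:
    accepted = Version(candidate) in fixes_only
    print(f"{candidate}: {'take it' if accepted else 'skip (features or breakage)'}")

# Only works as advertised if the maintainer actually ships fixes as 1.4.x
# patch releases rather than only bundling them into 1.5 or 2.0.
```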
IMO, for “boring software” you usually want to be on the oldest supported major/minor version, keeping an eye on the newest point version. That will have all the security patches, but you don't need to take every bug fix blindly.
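A sketch of that policy, assuming the public PyPI JSON API's "releases" map and an illustrative package and pin: find the newest point release on your current major.minor line.

```python
# Sketch: stay on the current major.minor line but surface its newest point
# release, assuming the public PyPI JSON API. Package name and pin are
# illustrative; other registries would need different metadata parsing.
import json
import urllib.request
from packaging.version import Version

def newest_point_release(package: str, current: str) -> Version:
    """Newest non-prerelease version on PyPI sharing current's major.minor."""
    with urllib.request.urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        releases = json.load(resp)["releases"].keys()
    cur = Version(current)
    same_line = [
        v for v in map(Version, releases)
        if (v.major, v.minor) == (cur.major, cur.minor) and not v.is_prerelease
    ]
    return max(same_line) if same_line else cur

if __name__ == "__main__":
    # e.g. prints the newest patch release on the 2.31 line
    print(newest_point_release("requests", "2.31.0"))
```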
For any update:
- it usually contains improvements to security
- except when it quietly introduces security defects which are discovered months later, often in a major rev bump
- but every once in a while it degrades security spectacularly and immediately, published as a minor rev
CI fights this, but that's peanuts compared to feature branches, and nothing compared to the lack of a monolith.
We had so many distinct packages on my last project that I had to massively upgrade a tool a coworker started to track the dependency tree so people stopped being afraid of the release process.
I could not think of any way to make lock files not be the absolute worst thing about our entire dev and release process, so each of the handful of deployables had its own lockfile, used only to cut hotfix releases without the dep tree changing out from underneath us. Artifactory helps only a little here.
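A minimal sketch of one way to diff the dependency tree between two releases, assuming npm-style package-lock.json files (lockfileVersion 2/3); the snapshot filenames are hypothetical, and other ecosystems' lockfiles would need different parsing.

```python
# Sketch: flatten an npm lockfile into "name: version" pairs so two releases'
# dependency trees can be diffed. Assumes a lockfileVersion 2/3
# package-lock.json, whose "packages" map is keyed by install path
# ("node_modules/foo"); the snapshot file paths are illustrative.
import json
import pathlib

def pinned_versions(lockfile: str) -> dict[str, str]:
    data = json.loads(pathlib.Path(lockfile).read_text(encoding="utf-8"))
    pins = {}
    for path, info in data.get("packages", {}).items():
        if path:  # skip the root project entry (empty key)
            name = path.split("node_modules/")[-1]
            pins[name] = info.get("version", "?")
    return pins

if __name__ == "__main__":
    old = pinned_versions("release-1.lock.json")  # hypothetical snapshots
    new = pinned_versions("release-2.lock.json")
    for name in sorted(old.keys() | new.keys()):
        if old.get(name) != new.get(name):
            print(f"{name}: {old.get(name, '-')} -> {new.get(name, '-')}")
```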
Just make sure to update when new CVEs are revealed.
Also, some software is always buggy, and every version is a mixed bag of new features, bugs, and regressions. It could be due to the complexity of the problem the software is trying to solve, or because it's just not written well.
Because if you're too far behind, then when you do "need" to, the upgrade takes days instead of hours.
Because AppSec requires us to adhere to strict vulnerability SLA guidelines and that's further reinforced by similar demands from our customers.