Comment by darkamaul

4 days ago

The "use cooldown" [0] blog post looks particularly relevant today.

I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package that's already in thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.

[0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...

Why not take it further and not update dependencies at all until you have to because of some missing feature or system compatibility requirement? If it works, it works.

  • The arguments for doing frequent releases partially apply to upgrading dependencies. Upgrading gets harder the longer you put it off. It’s better to do it on a regular schedule, so there are fewer changes at once and it preserves knowledge about how to do it.

    A cooldown is a good idea, though.

    • There's another variable, though, which is how valuable "engineering time now" is vs. "engineering time later."

      Certainly, having a regular/automated update schedule may take less clock time in total (due to preserved knowledge etc.), and incur less long-term risk, than deferring updates until a giant, risky multi-version multi-dependency bump months or years down the road.

      But if you have limited engineering resources (especially for a bootstrapped or cost-conscious company), or if the risks of outages now are much greater than the risks of outages later (say, once you're 5 years in and have much broader knowledge on your engineering team), then the calculus may very well shift towards freezing now, upgrading later.

      And in a world where supply chain attacks will get far more subtle than Shai-Hulud (especially AI-generated payloads that evolve as worms spread to avoid detection, and that may not need build-time scripting but instead defer their behavior until your code calls them), macro-level slowness isn't necessarily a bad thing.

      (It should go without saying that if you choose to freeze things, you should subscribe to security notification services that can tell you when a security update does release for a core server-side library, particularly for things like SQL injection vulnerabilities, and that your team needs the discipline to prioritize these alerts.)

      2 replies →

    • > Upgrading gets harder the longer you put it off.

      This is only true if you install dependencies that break backwards compatibility.

      Personally, I avoid this as much as possible.

  • There is a Goldilocks effect. Dependency just came out a few minutes ago? The community has had no time to catch the vulnerability, there's no real coverage from dependency scans, and it's a risk. Dependency came out a few months ago? It likely has a large number of known vulns.

  • That is indeed what one should do IMO. We've known for a long time now in the ops world that keeping versions stable is a good way to reduce issues, and it seems to me that the same principle applies quite well to software dev. I've never found the "but then upgrading is more of a pain" argument to be persuasive, as it seems to be equally a pain to upgrade whether you do it once every six months or once every six years.

    • The 'pain' comes from breaking changes. At worst, if you delay, you're going to ingest the same quantity of changes; at best, you might skip some short-lived ideas.

  • > Why not take it further and not update dependencies at all until you have to because of some missing feature or system compatibility requirement? If it works, it works.

    Indeed, there are people doing that, and communities where the consensus is that such an approach makes sense, or at least isn't frowned upon. (Hi, Gophers)

  • This works until you consider regular security vulnerability patching (which we have compliance/contractual obligations for).

    • This only makes sense for vulnerabilities that can actually be exploited in your particular use case and configuration of the library. A lot of vulns might just be noise and not exploitable, so there's no need to patch.

      1 reply →

  • Because updates don't just include new features but also bug and security fixes. As always, how relevant this is to you probably depends on the context. I agree that cooldown is a good idea, though.

    • > Because updates don't just include new features but also bug and security fixes.

      This practice needs to change, although it will be almost impossible to get a whole ecosystem to adopt. You shouldn’t have to take new features (and associated new problems) just to get bug fixes and security updates. They should be offered in parallel. We need to get comfortable again with parallel maintenance branches for each major feature branch, and comfortable with backporting fixes to older releases.
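
      For example, a fix that lands on main can be carried back to a maintenance branch with a cherry-pick; a rough sketch, with made-up branch names, version, and commit hash:

        # backport a security fix from main to the 2.x maintenance line
        git checkout 2.x
        git cherry-pick abc1234               # the fix commit from main
        git tag v2.4.9 && git push origin 2.x --tags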

      6 replies →

    • IMO for “boring software” you usually want to be on the oldest supported major/minor version, keeping an eye on the newest point version. That will have all the security patches. But you don't need to take every bug fix blindly.

      1 reply →

    • For any update:

      - it usually contains improvements to security

      - except when it quietly introduces security defects which are discovered months later, often in a major rev bump

      - but every once in a while it degrades security spectacularly and immediately, published as a minor rev

  • CI fights this. But that’s peanuts compared to feature branches and nothing compared to lack of a monolith.

    We had so many distinct packages on my last project that I had to massively upgrade a tool a coworker had started for tracking the dependency tree, so that people stopped being afraid of the release process.

    I could not think of any way to make lock files not be the absolute worst thing about our entire dev and release process, so the handful of deployables had a lockfile each that was only utilized to do hotfix releases without changing the dep tree out from underneath us. Artifactory helps only a little here.

  • Just make sure to update when new CVEs are revealed.

    Also, some software is always buggy, and every version is a mixed bag of new features, bugs, and regressions. It could be due to the complexity of the problem the software is trying to solve, or because it's just not written well.

  • Because AppSec requires us to adhere to strict vulnerability SLA guidelines and that's further reinforced by similar demands from our customers.

But even then you are still depending on others to catch the bugs for you, and it doesn't scale: if everybody did the cooldown thing, you'd be right back where you started.

  • I don't think that this Kantian argument is relevant in tech. We've had LTS versions of software for decades and it's not like every single person in the industry is just waiting for code to hit LTS before trying it. There are a lot of people and (mostly smaller) companies who pride themselves on being close to the "bleeding edge", where they're participating more fully in discovering issues and steering the direction.

  • The assumption in the post is that scanners are effective at detecting attacks within the cooldown period, not that end-device exploitation is necessary for detection.

    (This may end up not being true, in which case a lot of people are paying security vendors a lot of money to essentially regurgitate vulnerability feeds at them.)

  • To find a vulnerability, one does not necessarily deploy a vulnerable version to prod. It would be wise to run a separate CI job that tries to upgrade to the latest versions of everything, runs the tests, watches network traffic, and otherwise looks for suspicious activity. This can be done relatively economically, and the responsibility could be reasonably distributed across the community of users.
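
    A minimal sketch of such a canary job, assuming an npm project and a throwaway CI runner (branch name and monitoring steps are made up; this is illustrative, not a hardened setup):

      # canary job: try the newest versions in isolation, never in prod
      set -euo pipefail
      git checkout -B canary/latest-deps   # disposable branch
      npx npm-check-updates -u             # bump package.json to the latest of everything
      npm install
      npm test
      # optionally capture outbound network traffic on the runner
      # (e.g. with tcpdump) and diff it against a known-good baseline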

  • It does scale against this form of attack. This attack propagates by injecting itself into the packages you host. If you pull only 7d after release, you are infected 7d later. If your customers then also pull only 7d later, they are pulling 14d after the attack launched, giving defenders a much longer window by slowing down the propagation of the worm.

  • That worried me too, a sort of inverse tragedy of the commons. I'll use a weeklong cooldown, _someone else_ will find the issue...

    Until no-one does, for a week. To stretch the original metaphor, instead of an overgrazed pasture, we grow a communally untended thicket which may or may not have snakes when we finally enter.

    • That is statistically not possible, unless you are dealing with a very small sample size.

      The "until no one does" is not something that can happen in something like npm ecosystem, or even among the specific user of "left-pad".

I don't buy this line of reasoning. There are zero/one day vulnerabilities that will get extra time to spread. Also, if everyone switches to the same cooldown, wouldn't this just postpone the discovery of future Shai-Huluds?

I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, by users, then it will do nothing.

  • There are companies like Helix Guard scanning registries. They advertise static analysis / LLM analysis, but honeypot instances can also install packages and detect when certain files, like cloud configs, are accessed.

  • Your line of reasoning only makes sense if literally almost all developers in the world adopt cooldowns, and adopt the same cooldown.

    That would be a level of mass participation yet unseen by mankind (in anything, much less something as subjective as software development). I think we're fine.

    • I don't think so. What fraction of developers would even notice the malware? Some malware seems to barely be noticed.

  • For zero/one days, the trick is that you'd pair dependency cooldowns with automatic scanning for vulnerable dependencies.

    And in the cases where you have vulnerable dependencies, you'd force update them before the cooldown period had expired, while leaving everything else you can in place.
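
    A rough sketch of that pairing with npm tooling, assuming the cooldown flag shown elsewhere in the thread combines with -u the way you'd expect (the 7-day window and the audit level are illustrative):

      # default path: only accept versions older than the cooldown
      npx npm-check-updates -c 7 -u && npm install

      # exception path: if the audit flags something, take the fixed
      # versions right away, cooldown or not
      npm audit --audit-level=high || npm audit fix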

Pretty easy to do using npm-check-updates:

https://www.npmjs.com/package/npm-check-updates#cooldown

In one command:

  npx npm-check-updates -c 7
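
If you'd rather CI and teammates pick up the same policy without remembering the flag, ncu can also read its options from a config file; assuming the config key simply mirrors the CLI option name, a .ncurc.json would look something like:

  {
    "cooldown": 7
  }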

  • The docs list this caveat:

    > Note that previous stable versions will not be suggested. The package will be completely ignored if its latest published version is within the cooldown period.

    Seems like a big drawback to this approach.

    • I could see it being a good feature. If there have been two versions published within the last week or two, then there are reasonable odds that the previous one had a bug.

      1 reply →