
Comment by swatcoder

7 days ago

The point is to apply a cooldown to your "dumb" and unaccountable automation, not to your own professional judgment as an engineer.

If there's a bugfix or security patch that applies to how your application uses the dependency, then you review the changes, manually update your version if you feel comfortable with those changes, and accept responsibility for the intervention if it turns out you made a mistake and rushed in some malicious code.

Meanwhile, most of the time, most changes pushed to dependencies are not even in the execution path of any given application that integrates with them, and so don't need to be rushed in. And most others are "fixes" for issues that apparently weren't presenting an imminent test failure or support crisis for your users, and so don't warrant being rushed in either.

There's not really a downside here, for any software that's actually being actively maintained by a responsible engineer.
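
Concretely, most dependency bots already support exactly this kind of cooldown. Here's a minimal sketch using Renovate's minimumReleaseAge option (the seven-day window is an arbitrary example, not a recommendation; check the current docs for the exact syntax):

    {
      "minimumReleaseAge": "7 days"
    }

The automation waits out the window on every new release, while a human can still update by hand the moment they've reviewed a fix and decided to accept responsibility for it.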

You're not thinking about the system dependencies.

> Meanwhile, most of the time, most changes pushed to dependencies are not even in the execution path of any given application that integrates with them

Sorry, this is really ignorant. You don't appreciate how much churn there is in things like the kernel and glibc, even in stable branches.

  • > You're not thinking about the system dependencies.

    You're correct, because it's completely neurotic to worry about phantom bugs you've never actually observed but that must absolutely, positively be resolved as soon as a candidate fix has been pushed.

    If there's a zero-day vulnerability that affects your system, which is a rare but real thing, you can be notified and bypass a cooldown system, as sketched below.

    Otherwise, you've presumably either adapted your workflow to work around a bug or you never even recognized one was there. Either way, waiting an extra <cooldown> before applying a fix isn't going to harm you, but it will dampen the much more dramatic risk of instability and supply chain vulnerabilities associated with being on the bleeding edge.
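
    To make the bypass concrete, here's one plausible shape in Renovate terms. This assumes the vulnerabilityAlerts block can override the cooldown for advisory-driven updates; treat it as a sketch to verify against the docs, not tested config:

        {
          "minimumReleaseAge": "7 days",
          "vulnerabilityAlerts": {
            "minimumReleaseAge": null
          }
        }

    Everything waits out the cooldown except updates raised for known vulnerabilities, which can land immediately.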

    • > You're correct, because it's completely neurotic to worry about phantom bugs you've never actually observed but that must absolutely, positively be resolved as soon as a candidate fix has been pushed.

      Well, I've made a whole career out of fixing bugs like that. Just because you don't see them doesn't mean they don't exist.

      It is shockingly common to see systems bugs that, by luck, don't trigger for a long time, and then suddenly trigger out of the blue everywhere at once. Typically it's caused by innocuous changes in unrelated code, which is what makes it so insidious.

      The most recent example I can think of was an uninitialized variable in some kernel code: hundreds of devices ran that code reliably for a year, but an innocuous change in the userland application made the device crash on startup almost 100% of the time.

      The fix had been in stable for months, they just hadn't bothered to upgrade. If they had upgraded, they'd have never known the bug existed :)
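
      For anyone who hasn't been bitten by this class of bug, here's a minimal user-space sketch of the failure mode (hypothetical C, not the actual kernel code; since the read is undefined behavior, the demo itself is toolchain-dependent and best observed with optimizations off):

          #include <stdio.h>
          #include <string.h>

          /* "Unrelated" code: whatever it leaves on the stack becomes
             the accidental value of the uninitialized variable below. */
          static void unrelated_work(int fill) {
              char scratch[64];
              memset(scratch, fill, sizeof scratch);
          }

          /* Buggy code: reads `mode` before ever assigning it.
             It "works" only while stale stack bytes happen to be zero. */
          static int device_init(void) {
              int mode;              /* BUG: never initialized */
              if (mode != 0)         /* UB: reads an indeterminate value */
                  return -1;         /* crash path */
              return 0;              /* the path that worked for a year */
          }

          int main(void) {
              unrelated_work(0);     /* old userland: stack left zeroed */
              printf("init: %d\n", device_init());
              unrelated_work(1);     /* an "innocuous" unrelated change */
              printf("init: %d\n", device_init());
              return 0;
          }

      Nothing in device_init changed between the two calls; only the stale bytes underneath it did, which is exactly why the bug seems to appear out of the blue.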

      I can tell dozens of stories like that, which is why I feel so strongly about this.
