
Comment by YZF

I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review, and usually only after it had plenty of soak time. Pretty much everything was built from source (compilers, kernel, etc.). Builds [build servers/infra] couldn't reach the Internet at all, and there was process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on whether they applied to us, and how to mitigate or address them.

Then I moved to another company where builds could access the Internet. We upgraded things as soon as they came out, and people thought this was good practice because we were getting the latest bug fixes. CVEs were reviewed by a security team.

Then came a startup with a mix of other practices, some very good, but we also had a big CVE debt. E.g. we had secure boot on our servers and encrypted drives, and a pretty good grasp on securing components talking to each other.

Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. For me, company #1 is the better example in terms of dependency management. In general, company #1 had well-established security practices, and we had really secure products.

You forgot case #4: I worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.

And yes, they still thought they were doing the right thing.

  • To be fair, npm makes (made?) it weirdly hard to use lock files, so a lot of people did that by mistake. And when you do use a lockfile, it reinstalls every time, so a retagged package can just silently update.

    • FYI a retagged package would result in a different SHA512 integrity sum and fail the installation process. It won't "just silently update".

      Anyway, the point of parent and me wasn't that it was considered to be a "mistake", but people thinking they "are doing the right thing".

    • Doesn't `npm ci` prevent that? It fails if something doesn't match the lockfile, and wipes node_modules before running.

      This is on some ancient Node 16 build I was trying to clean up CI for, so not a very recent npm.

    • I can’t comment on the behavior of ancient npm versions, but with modern npm I would not even know how to skip using a lockfile.

      As for the parent comment about not using the lockfile for the production build, that’s just incredibly incompetent.

      Maybe they should hire someone who knows what they are doing. Contrary to the popular beliefs of backend engineers online, you also need some competency to do frontend properly.

      In this case what's needed is `npm ci` instead of `npm install`, or better, `pnpm install --frozen-lockfile`.

      pnpm will also do that automatically if the `CI` environment variable is set.

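The SHA-512 integrity check discussed upthread can be sketched by hand. This is an illustration, not npm's actual code: npm records an SSRI string in package-lock.json ("sha512-" plus the base64 of the raw SHA-512 digest of the tarball) and aborts with an EINTEGRITY error when fetched bytes no longer match. Assuming `openssl` is available, and using a stand-in file for the tarball:

```shell
# Sketch: recompute the "integrity" value npm stores in package-lock.json.
# Format is SSRI: "sha512-" + base64(raw SHA-512 digest of the tarball bytes).
printf 'fake tarball contents v1.0.0\n' > pkg.tgz
digest() { openssl dgst -sha512 -binary "$1" | openssl base64 -A; }

recorded="sha512-$(digest pkg.tgz)"   # what the lockfile would record

# Same bytes -> same integrity string: install proceeds.
[ "sha512-$(digest pkg.tgz)" = "$recorded" ] && echo "integrity ok"

# A retagged/republished version has different bytes -> mismatch, npm aborts.
printf 'fake tarball contents, retagged\n' > pkg.tgz
[ "sha512-$(digest pkg.tgz)" = "$recorded" ] || echo "integrity mismatch (EINTEGRITY)"
```

So a retag changes the digest and the install fails loudly, which is exactly why skipping the lockfile at deploy time throws that protection away.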

I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.

For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user it was dead simple to use, and it abstracted away a lot of things I do not know that well.
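As an illustration only (all names below are invented, and the real system layered Linkerd mTLS and Entra ID permissions on top of this), the "application X in namespace A can talk to me" rule maps naturally onto a Kubernetes NetworkPolicy along these lines:

```yaml
# Hypothetical sketch: allow only pods labelled app=app-x in namespace-a
# to reach pods labelled app=my-service. Names are made up.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-x
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: my-service          # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: namespace-a
          podSelector:
            matchLabels:
              app: app-x       # only app X, and only from namespace A
```

Putting `namespaceSelector` and `podSelector` in the same `from` element ANDs them together, which is what makes the rule "app X in namespace A" rather than "app X anywhere, or anything in namespace A".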

The important part is not the specific implementation, but the mindset behind it.

An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.

At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.

> Everyone seems to think they are doing the right thing

I like to think people would agree more on the appropriate method if they saw the risk as large enough.

If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.

  • > if they saw the risk as large enough.

    If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.

  • Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?

    • What part of "We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them" gave you that impression?

    • >running 5-year-old versions of stuff with tons of known security flaws

      No one in this thread proposed that, or anything that could be reasonably assumed to have meant that.

> It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues

I would count myself as a "frequent upgrader" - I admin a bunch of Ubuntu machines and typically set them to auto-update each night. I am aware of the risk of introducing new issues, but that's offset by the risk of not upgrading when new bugs are found and patched. There's also the problem of organisations that fall far behind on software versions, which then creates an even bigger issue, though this is more common with Windows/proprietary software, as you have less control over it. At least with Linux, you can generally find ways to install e.g. old versions of Java that may be required for specific tools.
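For reference, the nightly auto-update setup described above is typically the stock unattended-upgrades mechanism; on Ubuntu it boils down to installing the `unattended-upgrades` package and flipping two APT periodic switches. A minimal sketch (the file path is the Ubuntu default; exact policy, e.g. security-only vs. all updates, is configured separately in 50unattended-upgrades):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   // refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     // apply eligible upgrades daily
```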

There's no simple one-size-fits-all, and it depends on the organisation's pool of skills whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix or work around than the various issues of old software versions.