Comment by ncruces

2 months ago

This is a culture issue with developers who find it OK to have hundreds of (transitive) dependencies, and then follow processes that, for all intents and purposes, blindly auto-update them, thereby giving hundreds of third parties access to their build (or, worse, their execution) environments.

Adding friction to the sharing of code doesn't absolve developers from their decision to blindly trust a ridiculous number of third parties.

I find that the issue is much more often failing to update dependencies that have known security holes than updating too often and getting hit with a supply-chain malware attack.

  • There have been several recent supply chain attacks that show attackers are taking advantage of this (previously sensible) mentality. So it is time to pivot and come up with better solutions before it spirals out of control.

    • A model that Linux distros follow would work to an extent: you have developers of packages and separate maintainers who test and decide which packages, and which versions of them, to include or exclude. Imagine a JS distro which includes the top 2000 most popular libraries that are all known to work with each other. Your project can pull in any of these, and every package is cryptographically signed off on by both the developers and the maintainer.

      Vulnerabilities in Linux distro packages obviously happen. But a single developer cannot push code directly into, for example, Debian and compromise the world.

  • Not updating is the other side of the same problem: library owners feel it is OK to make frequent backwards-incompatible changes, often ignoring semver conventions. So consumers of their libraries are left with the choice to either pin old, insecure versions or spend time rewriting their code (and often transitive dependency code too) to keep up.

    This is what happens when nobody pays for anything and nobody feels they have a duty to do good work for free.

    • > This is what happens when nobody pays for anything and nobody feels they have a duty to do good work for free.

      Weirdly, some of the worst CVEs I can think of were in enterprise software.

It's not unreasonable to trust large numbers of trustworthy dependency authors. What we lack are the institutions to establish trust reliably.

If packages had to be cryptographically signed by multiple verified authors from a per-organization whitelist in order to enter distribution, that would cut down on the SPOF issue where compromising a single dev is enough to publish multiple malware-infested packages.
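
As a rough illustration of the mechanics, here is a minimal sketch of quorum signing. The ed25519 keys, signer names, quorum of 2, and the verifyRelease helper are all made up for illustration; a real registry would also need to solve identity verification and key distribution.

```go
// Sketch: a release enters distribution only if it carries valid signatures
// from at least `quorum` distinct keys on the organization's whitelist.
// All names and values here are illustrative, not any real registry's API.
package main

import (
	"crypto/ed25519"
	"fmt"
)

func verifyRelease(artifact []byte, sigs map[string][]byte,
	whitelist map[string]ed25519.PublicKey, quorum int) bool {
	valid := 0
	for signer, sig := range sigs {
		pub, ok := whitelist[signer]
		if !ok {
			continue // signatures from unknown identities don't count
		}
		if ed25519.Verify(pub, artifact, sig) {
			valid++
		}
	}
	return valid >= quorum
}

func main() {
	artifact := []byte("package tarball bytes")

	// Two registered maintainers; the registry, not the publisher, owns this list.
	whitelist := map[string]ed25519.PublicKey{}
	sigs := map[string][]byte{}
	for _, name := range []string{"alice", "bob"} {
		pub, priv, _ := ed25519.GenerateKey(nil) // nil reader falls back to crypto/rand
		whitelist[name] = pub
		sigs[name] = ed25519.Sign(priv, artifact)
	}

	fmt.Println("accepted:", verifyRelease(artifact, sigs, whitelist, 2))
}
```

The point is that compromising a single maintainer's key is no longer enough to publish; an attacker would need several whitelisted identities at once.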

  • "Find large numbers of trustworthy dependency authors in your neighborhood!"

    "Large numbers of trustworthy dependency authors in your town can't wait to show you their hottest code paths! Click here for educational livecoding sessions!"

    • I don't understand your critique.

      Establishing a false identity well enough to fool a FOSS author or organization is a lot of work. Even crafting a spear phishing email/text campaign doesn't compare to the effort you'd have to put in to fool a developer well enough to get offered publishing privileges.

      Of course it's possible, but so are beat-them-with-a-five-dollar-wrench attacks.

  • It IS unreasonable to trust individual humans across the globe in 100+ different jurisdictions pushing code that gets bundled into my application.

    How can you guarantee a long-trusted developer doesn't have a gun pointed at their head by their authoritarian government?

    In our B2B shop we recently implemented a process where developers cannot add packages from third-party sources; only first-party ones like Meta, Google, Spring, etc. are allowed. All other boilerplate must be written by developers, and on the rare occasion that a third-party dependency is needed, it's copied in source form, audited, and re-hosted on our internal infrastructure under an internal name (see the sketch at the end of this comment).

    To justify it to business folks, we presented some simple math: I added up the man-hours required to plug vulnerabilities plus the recurring cost of devsecops consultants, and found that it's cheaper to accept a 20-25% reduction in development velocity.

    Also, devsecops should never be offshored, due to the scenario I presented in my second paragraph.
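
    A cheap way to enforce the "internal sources only" rule automatically is to gate CI on the lockfile. This is a minimal sketch, written against an npm-style package-lock.json since that's the ecosystem under discussion in this thread; the internal registry URL is a placeholder.

    ```go
    // Fail the build if any dependency resolves to anything other than the
    // internal, audited registry. The registry URL is a hypothetical placeholder.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    const allowedPrefix = "https://registry.internal.example.com/"

    type lockfile struct {
        Packages map[string]struct {
            Resolved string `json:"resolved"`
        } `json:"packages"`
    }

    func main() {
        data, err := os.ReadFile("package-lock.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var lf lockfile
        if err := json.Unmarshal(data, &lf); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        violations := 0
        for name, pkg := range lf.Packages {
            if pkg.Resolved == "" {
                continue // entries without a resolved URL (e.g. the root project) are skipped
            }
            if !strings.HasPrefix(pkg.Resolved, allowedPrefix) {
                fmt.Printf("disallowed source for %s: %s\n", name, pkg.Resolved)
                violations++
            }
        }
        if violations > 0 {
            os.Exit(1)
        }
    }
    ```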

    • You've presented your argument as if rebutting mine, but to my mind you've reinforced my first paragraph:

      * You are trusting large numbers of trustworthy developers.

      * You have established a means of validating their trustworthiness: only trust reputable "first-party" code.

      I think what you're doing is a pretty good system. However, there are ways to include work by devs who lack "first-party" bona fides, such as when they participate in group development where their contributions are consistently audited. Do you exclude packages published by the ASF because some contributions may originate from troublesome jurisdictions?

      In any case, it is not necessary to solve the traitorous author problem to address the attack vector right in front of us, which is compromised authors.

    • If someone is wondering how effective such an approach is going to be with npm, consider the following:

      If you add jest, the popular test runner by Meta, that's adding 300 packages to your dependency graph.

      And that's before we've added a bundler, linter, code formatter, or even a web framework.

      So good luck with minimizing those dependencies.
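
      If you want to see the number for yourself, a rough count of what's installed is easy to get. This sketch only looks at the top level of node_modules and assumes npm's hoisted layout, so nested duplicates make the real number higher, not lower.

      ```go
      // Rough count of installed npm packages: top-level and scoped directories
      // under node_modules. Nested copies are not counted.
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
          "strings"
      )

      func main() {
          count := 0
          entries, err := os.ReadDir("node_modules")
          if err != nil {
              fmt.Fprintln(os.Stderr, err)
              os.Exit(1)
          }
          for _, e := range entries {
              if !e.IsDir() || strings.HasPrefix(e.Name(), ".") {
                  continue
              }
              if strings.HasPrefix(e.Name(), "@") {
                  // Scoped packages (@scope/name) live one directory deeper.
                  scoped, _ := os.ReadDir(filepath.Join("node_modules", e.Name()))
                  count += len(scoped)
                  continue
              }
              count++
          }
          fmt.Println(count, "packages in node_modules")
      }
      ```

      npm ls --all prints the full tree if you want names rather than a number.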

  • Problem is that beyond some threshold number of authors, the probability they're all trustworthy falls to zero.

    • It's true that smuggling multiple identities into the whitelist is one attack vector, and one reason why I said "cut down" rather than "eliminate". But that's not easy to do for most organizations.

      For what it's worth, back when I was active at the ASF we used to vote on releases: you needed at least 3 positive votes from a whitelist of approved voters to publish a release outside the org, and there was a cultural expectation of review. (Dunno if things have changed.) It would have been very difficult to duplicate this NPM attack against the upstream ASF release distribution system.

> This is a culture issue with developers who find it OK to have hundreds of (transitive) dependencies, and then follow processes that, for all intents and purposes, blindly auto-update them

I do not know about NPM. But in Rust this is common practice.

Very hard to avoid. The core of Rust is very thin; getting anything done typically involves dozens of crates, all pulled in at compile time from any old developer who is implicitly trusted.

  • The same is true for Go and for Java.

    • You can write entire applications in Go without resorting to any dependencies; the standard library is quite complete.

      Most projects will have a healthy 5-20 dependencies, though, with very few nested modules.
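
      For illustration, here is a complete, if tiny, HTTP service that uses nothing outside the standard library; the /healthz endpoint and port are arbitrary choices, not anything prescribed.

      ```go
      // A complete (if minimal) HTTP service with zero third-party dependencies:
      // routing, JSON encoding, logging, and the server itself all come from the
      // Go standard library.
      package main

      import (
          "encoding/json"
          "log"
          "net/http"
          "time"
      )

      func main() {
          http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
              w.Header().Set("Content-Type", "application/json")
              json.NewEncoder(w).Encode(map[string]any{
                  "status": "ok",
                  "time":   time.Now().UTC().Format(time.RFC3339),
              })
          })
          log.Fatal(http.ListenAndServe(":8080", nil))
      }
      ```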

Unfortunately that's almost the whole industry. Every software project I've seen has an uncountable number of dependencies, no matter whether it's npm, cargo, Go packages, whatever you name.

  • Every place I ever worked at made sure to curate the dependencies for their main projects. Heck, in some cases that was even necessary for certifications. Web dev might be the Wild West, but as soon as your software is installed on-prem by hundreds or thousands of paying customers, the stakes change.

> absolve developers

Doesn't this ultimately go all the way up to the top?

You have two devs: one who mostly writes their own code and only uses packages that are audited, etc.; the other uses packages willy-nilly. Who do you think will be hired? Who do you think will be able to match the pace of development that management and executives demand?

Rather than adding friction, there is something else that could benefit from having as little friction as sharing code does: publishing audits/reviews.

Be that as it may, a system that can fail catastrophically will. Security shouldn't be left to choice.