Comment by lrvick
2 months ago
And then the vulnerable code will just move to shell execs in the main library that fire the next time you include the library in your project.
If you do not have time to review a library, then do not use it.
I partially agree, but it does mitigate this case. The report says the attacker injected a `postinstall` script, which is a common vector.
On the other hand, yes, an attack at the code level, or a legitimate bug, wouldn't be prevented.
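For what it's worth, npm can be told not to run lifecycle scripts at all, which blocks the `postinstall` vector specifically (though, as noted, not code-level attacks). A minimal sketch, assuming a per-project `.npmrc`:

```
# .npmrc — applies to installs run in this project
# Disables preinstall/postinstall/prepare scripts for all dependencies.
ignore-scripts=true
```

The same behavior is available per-invocation via `npm install --ignore-scripts`. The trade-off is that some packages legitimately rely on install scripts (e.g. to build native addons) and will need manual build steps.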
I'm so sick of people saying this. If you use JS for any non-tiny project, you'll have a bunch of packages, and because of how modules work in JS, you'll have many, many transitive dependencies.
Nobody has time to review every package they use, especially when not all transitive dependencies are fully pinned.
If you have time to review every package, every time it updates, you might as well just write it yourself.
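On the pinning point: npm's lockfile does pin the full transitive tree, and `npm ci` installs exactly what the lockfile records, so at least the "unpinned sub-dependencies" part is addressable:

```
# package-lock.json records exact versions and integrity hashes
# for every transitive dependency, not just direct ones.
# `npm ci` installs verbatim from the lockfile and fails on any
# mismatch with package.json, unlike `npm install`, which may
# update the lockfile.
npm ci
```

This doesn't solve review burden, but it does mean updates only happen when someone deliberately regenerates the lockfile.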
Yes, this is a problem; no, reviewing every dependency is not the damn solution.
I have built and shipped production web applications for many large orgs with millions of users. Used 1-2 libs tops, which I reviewed myself.
Also now as someone that runs a security consulting firm, we absolutely have clients that review 100% of dependencies even when it is expensive.
Both are valid options.
Normalized negligence is still negligence.
Show them this 1984 Ken Thompson paper: "Reflections on Trusting Trust"
https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
And then hardware compromises…
I don't mean you should just install anything; I mean it's not a problem particular to the JS ecosystem.
I full-source bootstrapped a Linux distro from hex0 all the way up to Node.js binaries just to deal with trusting-trust risks.
"just give up" is not a valid strategy.
https://codeberg.org/stagex/stagex