Comment by kelnos
2 months ago
As a user of npm-hosted packages in my own projects, I'm not really sure what to do to protect myself. It's not feasible for me to audit every single one of my dependencies, and every one of my dependencies' dependencies, and so on. Even if I had the time to do that, I'm not a TypeScript/JavaScript expert, and I'm certain there are a lot of obfuscated things an attacker could do that I wouldn't recognize as embedded malware.
One thing I was thinking of was sort of a "delayed" mode to updating my own dependencies. The idea is that when I want to update my dependencies, instead of updating to the absolute latest version available of everything, it updates to versions that were released no more than some configurable amount of time ago. As a maintainer, I could decide that a package that's been out in the wild for at least 6 weeks is less likely to have unnoticed malware in it than one that was released just yesterday.
Obviously this is not a perfect fix, as there's no guarantee that the delay time I specify is enough for any particular package. And I'd want the tool to present me with options sometimes: e.g. if my current version of a dep has a vulnerability, and the fix for it came out a few days ago, I might choose to update to it (better eliminate the known vulnerability than refuse to update for fear of an unknown one) rather than wait until it's older than my threshold.
> It's not feasible for me to audit every single one of my dependencies, and every one of my dependencies' dependencies
I think this is a good argument for reducing your dependency count as much as possible, and keeping them to well-known and trustworthy (security-wise) creators.
"Not-invented-here" syndrome is counterproductive if you can trust all authors, but in an uncontrolled or unaudited ecosystem it's actually pretty sensible.
Have we all forgotten the left-pad incident?
This is an ecosystem that has taken code reuse to the (unreasonable) extreme.
When JS was becoming popular, I’m pretty sure every dev cocked an eyebrow at the dependency system and wondered how it’d be attacked.
> This is an ecosystem that has taken code reuse to the (unreasonable) extreme.
Not even that, actually. The wheel is reinvented over and over again in this exact ecosystem. Many packages are low quality, and not even suitable to be reused much.
Not on HN, the land of "you should use a SaaS or PaaS for that (because I might eventually work there and make money)" or "I don't want to maintain that code because it's not strictly related to my CRUD app business! How dare you!"
1.2 million weekly downloads to this day, when we've had builtin padStart since ES2017.
Yes, I remember thinking at the time "how are people not ashamed to install this?"
I found it funny back when people were abandoning Java for JavaScript thinking that was better somehow...(especially in terms of security)
NPM is good for building your own stack but it's a bad idea (usually) to download the Internet. No dep system is 100% safe (including AI, generating new security vulns yay).
I'd like to think that we'll all stop grabbing code we don't understand and thrusting it into places it doesn't belong, or at least do it more slowly. However, I also don't have much faith in the average (especially frontend web) dev. They are often the same idiots doing XYZ in the street.
I predict more hilarious (scary, even) kerfuffles, probably even major militaries losing control of things, Terminator-style.
If it's not feasible to audit every single dependency, it's probably even less feasible to rewrite every single dependency from scratch. Avoiding that duplicated work is precisely why we import dependencies in the first place.
Most dependencies do much more than we need from them. Often it means we only need one or a few functions from them. This means one doesn't need to rewrite whole dependencies usually. Don't use dependencies for things you can trivially write yourself, and use them for cases where it would be too much work to write yourself.
It isn't feasible to audit every line of every dependency, just as it's not possible to audit the full behavior of every employee that works at your company.
In both cases, the solution is similar: try to restrict access to vital systems only to those you trust, so that you have less need to audit their every move.
Your system administrators can access the server room, but the on-site barista can't. Your HTTP server is trusted enough to run in prod, but a color-formatting library isn't.
This is true to the extent that you actually _use_ all of the features of a dependency.
You only need to rewrite what you use, which for many (probably most) libraries will be 1% or less of it
> it's probably even less feasible to rewrite every single dependency from scratch.
When you code in a high-security environment, where bad code can cost the company millions of dollars in fines, somehow you find a way.
The sibling commenter is correct. You write what you can. You only import from trusted, vetted sources.
> If it's not feasible to audit every single dependency, it's probably even less feasible to rewrite every single dependency from scratch.
There is no need to rewrite dependencies. Sometimes it just so happens that a project can live without outputting fancy colorful text to stdout, or doesn't need to spread transitive dependencies on debug utilities. Perhaps these concerns should be a part of the standard library, perhaps these concerns are useless.
And don't get me started on bullshit polyfill packages. That's an attack vector waiting to be exploited.
It's much more feasible these days. For my personal projects I just have CC create only a plain HTML file with raw JS and script links.
Not sure I completely agree as you often use only a small part of a library
One interesting side effect of AI is that it makes it sometimes easy to just recreate the behavior, perhaps without even realizing it..
is it that infeasible with LLMs?
a lot of these dependencies are higher-order function definitions, which never change, and could be copy/pasted around just fine.
"rewrite every single dependency from scratch"
No need to. But also no need to pull in a dependency that could be just a few lines of own (LLM generated) code.
Sounds like the job for an LLM tool to extract what's actually used from appropriately-licensed OSS modules and paste directly into codebases.
>> and keeping them to well-known and trustworthy (security-wise) creators.
The true threat here isn't the immediate dependency, though; it's the recursive supply chain of dependencies. "Trustworthy" doesn't make any sense either when the root cause is almost always someone trustworthy getting phished. Finally, if I'm not capable of auditing the dependencies, it's unlikely I can replace them with my own code. That's like telling a vibe coder the solution to their brittle creations is to not use AI and write the code themselves.
> Finally if I'm not capable of auditing the dependencies it's unlikely I can replace them with my own code. That's like telling a vibe coder the solution to their brittle creations is to not use AI and write the code themselves.
In both cases, actually doing the work and writing a function instead of adding a dependency or asking an AI to write it for you will probably make you a better coder and one who is better able to audit code you want to blindly trust in the future.
"A little copying is better than a little dependency" -- Go proverb (also applies to other programming languages)
IMO, one thing I like in npm packages is that they are usually small, and they should ideally converge towards stability (frozen)...
If they are not, something is bad and the dependency should be "reduced" if at all possible.
Exactly.
I always tried to keep the dependencies to a minimum.
Another thing you can do is lock versions to a year ago (this is what linux distros do) and wait for multiple audits of something, or lack of reports in the wild, before updating.
I saw one of those word-substitution browser plugins a few years back that swapped "dependency" for "liability", and it was basically never wrong.
(Big fan of version pinning in basically every context, too)
> I think this is a good argument for reducing your dependency count as much as possible, and keeping them to well-known and trustworthy (security-wise) creators.
I wonder to what extent the extreme dependency count is a symptom of a standard library that is too minimalistic for the ecosystem's needs.
Perhaps this issue could be addressed by a "version set" approach to bundling stable npm packages.
I remember people in the JS crowd getting really mad at the implication that this all was pretty much inevitable, like 10/15 years ago. Can’t say they didn’t do great things since then, but it’s not like nobody saw this coming.
Easier said than done when your ecosystem of choice took the Unix philosophy of doing one thing well, misinterpreted it and then drove it off a cliff. The dependency tree of a simple Python service is incomparable to a Node service of similar complexity.
As a security guy, for years, you get laughed out of the room suggesting devs limit their dependencies and don't download half of the internet while building. You are an obstruction for making profit. And obviously reading the code does very little since modern (and especially Javascript) code just glues together frameworks and libraries, and there's no way a single human being is going to read a couple million lines of code.
There are no real solutions to the problem, except for reducing exposure somewhat by limiting yourself to a mostly frozen subset of packages that are hopefully vetted more stringently by more people.
The "solution" would be using a language with a strong standard library and then having a trusted 3rd party manually audit any approved packages.
THEN use Artifactory on top of that.
That's boring and slow though. Whatever, I want my packages and I want them now. Part of the issue is that the whole industry is built upon goodwill and hope.
Some 19 year old hacked together a new front end framework last week, better use it in prod because why not.
Occasionally I want to turn off my brain and just buy some shoes. The Timberland website made that nearly impossible last week. When I gave up on logging in for free shipping and just paid full price, I get an email a few days later saying they ran out of shoes.
Alright. I guess Amazon is dominant for a reason.
This is the right answer. I'm willing to stick my neck out and assert that languages with a "minimal" standard library are defective by design. The argument about APIs being stuck is moot with approaches like Rust's epochs or "strict mode".
Standard libraries should include everything needed to interact with modern systems. This means HTTP parsing, HTTP requests, and JSON parsing. Some languages are excellent (like Python), some are halfway there (like Go), and some are just broken (Rust).
External libraries are for niche or specialized functionality. External libraries are not for functionality that is used by most modern software. To put your head in the ground and insist otherwise is madness and will lead to ridiculous outcomes like this.
>Some 19 year old hacked together a new front end framework last week, better use it in prod because why not.
The thing is, you don't have to be this undiscerning to end up with tons of packages.
Let's init a default next.js project. How many dependencies are there?
react, react-dom, next, typescript, @types/node, @types/react, @types/react-dom.
OK so 7... seems like a lot in some sense, but it's still missing many reasonable dependencies. Some sort of styling solution (tailwind, styled components, etc). Some sort of http client or graphql. And more. But let's just use the base dependencies as an example. Is 7 so bad? Maybe, maybe not, but you need to go deeper. How many packages are there?
55. What are they? I have no idea, go read the lock file I guess.
All of this while being pretty reasonable.
Java + Spring Boot BOM + Maven Central (signed jars) does fit the description.
I agree, it always seems to be NPM, and there's a reason for that.
I don’t recall hearing about constant supply chain attacks with CPAN
This comes across as not being self-aware as to why security is laughed out of rooms: I read this as you correctly identifying some risks, but then only offering the false dichotomy of "risk" and "no risk", without exploring middle grounds between the two or finding third ways that break the dichotomy.
I could just be projecting my own bad experiences with "security" folks (in quotes as I can't speak to their qualifications). My other big gripe is when they don't recognize UX as a vital part of security (if their solution is unusable, it won't be used).
This is how our security lead is. "I've identified X as a vulnerability, recommended remediation is to remove it." "We literally can't." He pokes around finding obscure vulnerabilities and recommends removing business-critical software, yet we don't have MFA, our server and networking UIs are on the main VLAN accessible by anyone, we have no tools to patch third-party software, and all of our root passwords are the same. We bring real security concerns like this to him, and they just get backlogged because the stupid tools he runs only detect software vulns. It's insanity.
I've been a web developer for over two decades. I have specific well-tested solutions for avoiding external JS dependencies. Despite that, I have the exact same experience as the above security guy. Most developers love adding dependencies.
At my previous enterprise we had a saying:
Security: we put the ‘no’ in ‘innovation’.
I've always been very careful about dependencies, and freezing them to versions that are known to work well.
I was shocked when I found out that at some of the most profitable shops, most of their code is just a bunch of different third-party libraries badly cobbled together, with only a superficial understanding of how those libraries work.
Your proposed solution does not work for web applications built with node packages.
Essential tools such as Jest add 300 packages on their own.
You already have hundreds to thousands of packages installed, fretting over a few more for that DatePicker or something is pretty much a waste of time.
Agree on the only solution being reducing dependencies.
Even more weird in the EU where things like Cyber Resilience Act mandate patching publicly known vulnerabilities. Cool, so let's just stay up2date? Supply-chain vuln goes Brrrrrr
The post you replied to suggested a real solution to the problem. It was implemented at my current org years ago (after log4j) and we have not been affected by any of the dependency malware that has appeared since.
> You are an obstruction for making profit.
This explains a lot. Really, this is a big part of why society is collapsing as we speak.
"There should be no DRM in phones" - "You Are An Obstruction To Making Profit".
"People should own their devices, we must not disallow custom software on it" - "YAAOTMP"
"sir, the application will weigh 2G and do almost nothing yet, should we minify it or use different framework?" - "YAAOTMP".
"Madame, this product will cost too much and require unnecessary payments" - "YAAOTMP"
Etc. etc. Like in this "Silicon Valley" comedy series. But for real, and affecting us greatly.
Death comes to corp CEO, he screams YAAOTMP, death leaves shocked. Startup CEO watches the scene. His jedi sword turns from blue to red.
Package registries should step up. They are doing some stuff but still NPM could do more.
Personally, I go further than this and just never update dependencies unless the dependency has a bug that affects my usage of it. Vulnerabilities are included.
It is insane to me how many developers update dependencies in a project regularly. You should almost never be updating dependencies, when you do it should be because it fixes a bug (including a security issue) that you have in your project, or a new feature that you need to use.
The only time this philosophy has bitten me was in an older project where I had to convince a PM who built some node project on their machine that the vulnerability warnings were not actually issues that affected our project.
Edit: because I don't want to reply to three things with the same comment - what are you using for dependencies where a) you require frequent updates and b) those updates are really hard?
Like for example, I've avoided updating node dependencies that have "vulnerabilities" because I know the vuln doesn't affect me. Rarely do I need to update to support new features because the dependency I pick has the features I need when I choose to use it (and if it only supports partial usage, you write it yourself!). If I see that a dependency frequently has bugs or breakages across updates then I stop using it, or freeze my usage of it.
Then you run the risk of drifting so much behind that when you actually have to upgrade it becomes a gargantuan task. Both ends of the scale have problems.
That's why there's an emphasis on stability. If things works fine, don't change. If you're applying security patches, don't break the API.
In NPM world, there's so much churn that it would be comical if not for the security aspects.
That's only a problem for you, the developer, though, and is merely an annoyance about time spent. And it's all stuff you had to do anyway to update--you're just doing it all at once instead of spread out over time. A supply chain malware attack is a problem for every one of your users--who will all leave you once the dust is settled--and you end up in headline news at the top of HN's front page. These problems are not comparable. One is a rough day. The other is the end of your project.
counterpoint, if the runtime itself (nodejs) has a critical issue, you haven't updated for years, you're on an end-of-life version, and you cannot upgrade because you have dependencies that do not support the new version of the runtime, you're in for a painful day. The argument for updating often is that when you -are- exposed to a vulnerability that you need a fix for, it's a much smaller project to revert or patch that single issue.
Otherwise, I agree with the sentiment that too many people try to update the world too often. Keeping up with runtime updates as often as possible (node.js is more trusted than any given NPM module) and updating only when dependencies are no longer compatible is a better middle ground.
The same logic you used for runtimes also applies to libraries. Vulnerabilities are found in popular JS libraries all the time. The surface area is, of course, smaller than that of a runtime like Node.js, but there is still lots of potential for security issues with out-of-date libraries.
There really is no good solution other than to reduce the surface area for vulnerabilities by reducing the total amount of code you depend on (including third-party code). In practice, this means using as few dependencies as possible. If you only use one or two functions from lodash or some other helper library, you're probably better off writing or pulling in those functions directly instead.
Fully disagree. The problem is that when you do need to upgrade, either for a bug fix, security fix, or new feature that you need/want, it's a lot easier to upgrade if your last upgrade was 3 months ago than if it was 3 years ago.
This has bitten me so many times (usually at large orgs where policy is to be conservative about upgrades) that I can't even consider not upgrading all my dependencies at least once a quarter.
yeah, I typically start any substantial development work with getting things up to date so you're not building on something you'll find out is already broken when you do get around to that painful upgrade.
this seems to me to be trading one problem that might happen for one that is guaranteed: a very painful upgrade. Maybe you only do it once in a while but it will always suck.
The problem here is that there might be a bug fix or even security fix that is not backported to old versions, and you suddenly have to update to a much newer version in a short time
That works fine if you have few dependencies (obviously this is a good practice) and you have time to vet all updates and determine whether a vulnerability impacts your particular code, but that doesn’t scale if you’re a security organization at, say, a small company.
Dependency hell exists at both ends. Too quick can bite you just as much as being too slow/lazy.
The article explicitly mentions a way to do this:
Use NPM Package Cooldown Check
The NPM Cooldown check automatically fails a pull request if it introduces an npm package version that was released within the organization’s configured cooldown period (default: 2 days). Once the cooldown period has passed, the check will clear automatically with no action required. The rationale is simple - most supply chain attacks are detected within the first 24 hours of a malicious package release, and the projects that get compromised are often the ones that rushed to adopt the version immediately. By introducing a short waiting period before allowing new dependencies, teams can reduce their exposure to fresh attacks while still keeping their dependencies up to date.
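A homegrown version of that check is mostly date arithmetic. A minimal sketch (GNU date assumed; the hardcoded `published` timestamp stands in for what `npm view <pkg>@<version> time --json` would return in a real script):

```shell
#!/bin/sh
# Hypothetical cooldown check: block a version published within the
# last COOLDOWN_DAYS. 'published' is hardcoded here for illustration;
# a real script would read it from the registry metadata.
COOLDOWN_DAYS=2
published="2024-01-01T00:00:00Z"

cutoff=$(date -u -d "$COOLDOWN_DAYS days ago" +%s)   # GNU date syntax
if [ "$(date -u -d "$published" +%s)" -lt "$cutoff" ]; then
  echo "OK: version is outside the cooldown period"
else
  echo "BLOCKED: version is too new, wait for the cooldown to pass"
fi
```

Wire that into CI over every version added to the lockfile and you get roughly the behaviour described above, minus the automatic clearing.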
This attack was only targeting user environments.
Keeping secrets in a different security context would help: e.g. root- or secrets-user-owned secret files, accessible to the user only for certain actions (the simplest way would be e.g. a sudoers file whitelisting a precise command like git push). That would prevent arbitrary reads of secrets.
The other part of this attack, creating new GitHub Actions, is also a privilege normal users don't need to exercise often, or unconstrained. There are certainly ways to prevent/restrict that too.
All this "was a supply chain attack" fuss here is IMO missing the forest for the trees. Changing the security context for these two actions is easier to implement than supply chain analysis, and this basic approach is more reliable than trusting the community to find a backdoor before you apply the update. It's security 101. Sure, there are post-install scripts that can attack the system, but that is a whole different game.
That's a feature of stepsecurity though, it's not built-in.
This is basically what I recommended people do with windows updates back when MS gave people a choice about when/if to install them, with shorter windows for critical updates and much longer ones for low priority updates or ones that only affected things they weren't using.
And hope there isn’t some recently patched zero-day RCE exploit at the same time.
> sort of a "delayed" mode to updating my own dependencies. The idea is that when I want to update my dependencies, instead of updating to the absolute latest version available of everything, it updates to versions that were released no more than some configurable amount of time ago.
For Python's uv, you can do something like:
> uv lock --exclude-newer $(date --iso -d "2 days ago")
Awesome tip, thanks!
oh that uv lock is neat, i am going to give that a go
pnpm just added this: https://pnpm.io/blog/releases/10.16
This sounds nice in theory, but does it really solve the issue? If no one's installing that package, then no one is noticing the malware and no one is reporting it either. It merely slightly improves the chances that the author would notice a version they didn't release, but that doesn't work if the author isn't actively working on the compromised project.
These days compromised packages are often detected automatically by software that scans all packages uploaded to npm like https://socket.dev or https://snyk.io. So I imagine it's still useful to have those services scan these packages first, before they go out to the masses.
Measures like this also aren't meant to be "final solutions" either, but stop-gaps. Slowing the spread can still be helpful when a large scale attack like this does occur. But I'm also not entirely sure how much that weighs against potentially slowing the discovery as well.
Ultimately this is still a repository problem and not a package manager one. These are merely band-aids. The responsibility lies with npm (the repository) to implement proper solutions here.
> The responsibility lies with
No, it doesn't solve the issue, but it probably helps.
And I agree that if everyone did this, it would slow down finding issues in new releases. Not really sure what to say to that... aside from the selfish idea that if I do it, but most other people don't, it won't affect me.
a long enough delay would solve the issue for account takeovers, and bold attacks like this.
It would not solve for a bad actor gaining trust over years, then contributing seemingly innocent code that contains an exploitable bug with enough plausible deniability to remain on the team after it is patched.
minimumReleaseAge is pretty good! Nice!!
I do wish there were some lists of compromised versions, that package managers could disallow from.
there's apparently an npm RFC from 2022 proposing a similar (but potentially slightly better?) solution https://github.com/npm/rfcs/issues/646
bun is also working on it: https://github.com/oven-sh/bun/issues/22679
Aren't they found quickly because people upgrade quickly?
this btw would also solve social media. if only accounts required a month waiting period before they could speak.
You can switch to the mentioned "delayed" mode if you're using pnpm. A few days ago, pnpm 10.16 introduced a minimumReleaseAge setting that delays the installation of newly released dependencies by a configurable amount of time.
https://pnpm.io/blog/releases/10.16
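For reference, it's a one-line config change. A sketch, assuming the shape described in the linked release notes (the value is in minutes, so a week is 10080; double-check the exact syntax against the post):

```
# pnpm-workspace.yaml (pnpm >= 10.16)
# Don't install any dependency version released less than 7 days ago.
minimumReleaseAge: 10080
```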
> sort of a "delayed" mode
That's the secret lots of enterprises have relied on for ages. Don't be bleeding edge; let the rest of the world guinea-pig the updates and listen for them to sound the alarm if something's wrong. Obviously you do still need to pay attention to the occasional, major, hot security issues and deal with them in a swift fashion.
Another good practice is to control when your updates occur - time them when it's ok to break things and your team has the bandwidth to fix things.
This is why I laughed hard when Microsoft moved to aggressively push Windows updates and the inevitable borking it did to people's computers at the worst possible times ("What's that you said? You've got a multi-million dollar deliverable pitch tomorrow and your computer won't start due to a broken graphics driver update?"). At least now there's a "delay" option similar to what you described, but it still riles me that update descriptions are opaque (so you can't selectively manage risk) and you don't really have the degree of control you ought to.
pnpm just added minimum age for dependencies https://pnpm.io/blog/releases/10.16#new-setting-for-delayed-...
From your link:
> In most cases, such attacks are discovered quickly and the malicious versions are removed from the registry within an hour.
By delaying the infected package's availability (by "aging" dependencies), we're only delaying the time, and reducing the samples, until it's detected. Infections that lie dormant are even more dangerous than explosive ones.
The only benefit would be if, during this freeze, repository maintainers were successfully pruning malware before it hits the fan, and the freeze would give scanners more time to finish their verification pipelines. That's not happening afaik; NPM is crazy fast going from `npm publish` to worldwide availability, and scanning is insufficient by many standards.
Afaict many of these recent supply chain attacks _have_ been detected by scanners. Which ones flew under the radar for an extended period of time?
From what I can tell, even a few hours of delay for actually pulling dependencies post-publication to give security tools a chance to find it would have stopped all (?) recent attacks in their tracks.
Thank god, adopting this immediately. Next I’d like to see Go-style minimum version selection instead.
Oh brilliant. I've been meaning to start migrating my use to pnpm; this is the push I needed.
When using Go, you don't get updated indirect dependencies until you update a direct dependency. It seems like a good system, though it depends on your direct dependencies not updating too quickly.
The auto-updating of dependencies via the `^` version prefix is the root problem.
It's best to never use `^` and always specify an exact version, but many maintainers apparently can't be bothered to update their dependencies themselves, so it became the default.
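For npm specifically, the `^` default can be switched off per project with `save-exact` (a real npm config option):

```
# .npmrc (checked into the repo)
# Makes `npm install <pkg>` record "4.17.21" instead of "^4.17.21".
save-exact=true
```

With this in place, new installs pin the exact resolved version in package.json; transitive dependencies still need the lockfile (and `npm ci`) to stay put.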
Maybe one approach would be to pin all dependencies, and not use any new version of a package until it reaches a certain age. That would hopefully be enough time for any issues to be discovered?
People living on the latest packages with their dependabots never made any sense to me, ADR. They trusted their system too much
If you don't review the pinned versions, it makes no difference.
Packages can still be updated, even if pinned. If a dependency of a dependency is not pinned - it can still be updated.
Use fewer dependencies :)
And larger dependencies that can be trusted in larger blocks. I'll bet half of a given project's dependencies are there to "gain experience with" or to be able to name-drop that you've used them.
Less is More.
We used to believe that. And then W3C happened.
Stick to (pin) old stable versions, don't upgrade often. Pain in the butt to deal with eventual minimum-version-dependency limitations, but you don't get the brand new releases with bugs. Once a year, get all the newest versions and figure out all the weird backwards-incompatible bugs they've introduced. Do it over the holiday season when nobody's getting anything done anyway.
> One thing I was thinking of was sort of a "delayed" mode to updating my own dependencies.
You can do it:
https://github.blog/changelog/2025-07-01-dependabot-supports...
https://docs.renovatebot.com/configuration-options/#minimumr...
https://www.stepsecurity.io/blog/introducing-the-npm-package...
If your employer paid your dependencies' verified authors to provide them licensed and signed software, you wouldn't have to rely on a free third party intermediary with a history of distributing massive amounts of malware for your security.
> As a user of npm-hosted packages in my own projects, I'm not really sure what to do to protect myself. It's not feasible for me to audit every single one of my dependencies, and every one of my dependencies' dependencies, and so on. Even if I had the time to do that, I'm not a TypeScript/JavaScript expert, and I'm certain there are a lot of obfuscated things an attacker could do that I wouldn't recognize as embedded malware.
I think Github's Dependabot can help you here. You can also host your own little instance of DependencyTrack and keep up to date with vulnerabilities.
> One thing I was thinking of was sort of a "delayed" mode to updating my own dependencies.
You can do this with npm (since version 6.9.0).
To only get registry deps that are over a week old:
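Presumably via npm's `--before` option (the command itself didn't survive the copy; this sketch assumes GNU date, and echoes the command rather than running it):

```shell
# npm >= 6.9.0: --before makes npm resolve only versions published
# before the given date. Here: one week ago.
CUTOFF=$(date --iso-8601=seconds -d "1 week ago")
echo npm install --before="$CUTOFF"   # drop 'echo' to actually run it
```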
Source: Darcy Clarke - https://bsky.app/profile/darcyclarke.me/post/3lyxir2yu6k2s
I like to pin specific versions in my package.json so dependencies don't change without manual steps, and use "npm ci" to install specifically the versions in package-lock.json. My CI runs "npm audit" which will raise the alarms if a vulnerability emerges in those packages. With everything essentially frozen there either is malware within it, or there is not going to be, and the age of the packages softly implies there is not.
> instead of updating to the absolute latest version available of everything, it updates to versions that were released no more than some configurable amount of time ago
The problem with this approach is you need a certain number of guinea pigs on the bleeding edge or the outcome is the same (just delayed). There is no way for anyone involved to ensure that balance is maintained. Reducing your surface area is a much more effective strategy.
Not necessarily, some supply chain compromises are detected within a day by the maintainers themselves, for example by their account being taken over. It would be good to mitigate those at least.
In that specific scenario, sure; but I don't think that's a meaningful guardrail for a business.
I think it definitely couldn’t hurt. You’re right that it doesn’t eliminate the threat of supply chain attacks, but it would certainly reduce them and wouldn’t require much effort to implement (either manually or via script). You’re basically giving maintainers and researchers time to identify new malware and patch or unpublish it before you’re exposed. Just make sure you still take security patches.
Rather than the user doing that "delay" installation, it would be a good idea if the package repository (i.e. NPM) actually enforced something like that.
For example, whenever a new version of a package is released, it's published to the repository but not allowed to be installed for at least 48 hours, and this gives time to any third-party observers to detect a malware early.
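Until a registry enforces something like that, a client-side approximation is possible: the registry already exposes publish timestamps (the `time` field returned by `npm view <pkg> time --json`), so a wrapper could refuse versions younger than the window. A minimal sketch, with fabricated timestamps and a hypothetical 48-hour quarantine:

```javascript
// Filter registry versions by a minimum age -- a client-side
// approximation of a publish quarantine. `times` is shaped like the
// `time` field of npm registry metadata.
const QUARANTINE_MS = 48 * 60 * 60 * 1000;

function allowedVersions(times, now = Date.now()) {
  return Object.entries(times)
    // "created"/"modified" are registry bookkeeping keys, not versions
    .filter(([version]) => version !== "created" && version !== "modified")
    .filter(([, stamp]) => now - Date.parse(stamp) >= QUARANTINE_MS)
    .map(([version]) => version);
}

// Example with fabricated timestamps:
const sampleTimes = {
  created: "2024-01-01T00:00:00.000Z",
  "1.0.0": "2024-01-01T00:00:00.000Z",
  "1.0.1": "2024-06-01T00:00:00.000Z", // only 24h old at the check below
};
console.log(allowedVersions(sampleTimes, Date.parse("2024-06-02T00:00:00Z")));
// → [ '1.0.0' ]
```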
I recently started using npm for an application where there’s no decent alternative ecosystem.
The Signal desktop app is an Electron app. Presumably it has the same problem.
Does anyone know of any reasonable approaches to using npm securely?
“Reduce your transitive dependencies” is not a reasonable suggestion. It’s similar to “rewrite all the Linux kernel modules you need from scratch” or “go write a web browser”.
Most big tech companies maintain their own NPM registry that only includes approved packages. If you need a new package available in that registry you have to request it. A security team will then review that package and its deps and add it to the list of approved packages…
I would love to have something like that "in the open"…
A Debian version of NPM? I've seen a lot of hate on Reddit and other places for Debian because the team focuses on stability. When you look at the projects complaining, they're almost always based on Rust or Python.
> “Reduce your transitive dependencies” is not a reasonable suggestion. It’s similar to “rewrite all the Linux kernel modules you need from scratch” or “go write a web browser”.
Oh please, do not compare writing a bunch of utilities for your "app" with writing a web browser.
This is where distributed code audits come in, you audit what you can, others audit what they can, and the overlaps of many audits gives you some level of confidence in the audited code.
https://github.com/crev-dev/
> I'm not really sure what to do
You need an EDR and a code-repo scanner. Treating this purely as a technical problem of the infrastructure will accomplish little. The people who created these systems are long gone, and had/have huge gaps in their ability to stop creating these problems.
npm shrinkwrap and then check in your node_modules folder. Don't have each developer (or worse, user) individually run npm install.
It's common among grizzled software engineering veterans to say "Check in the source code to all of your dependencies, and treat it as if it were your own source code." When you do that, version upgrades are actual projects. There's a full audit trail of who did what. Every build is reproducible. You have full visibility into all code that goes into your binary, and you can run any security or code maintenance tools on all of it. You control when upgrades happen, so you don't have a critical dependency break your upcoming project.
You can use Sonatype or Artifactory as a self-hosted provider that keeps its own NPM repository for your packages. This way you can delay and control updates. It is common enterprise practice.
I update my deps once a year or when I specifically need to. That helps a bit. Though it upsets the security theatre peeps at work who just blindly think dependabot issues mean I need to update dependencies.
I never understood the "let's always pin everything to the latest version and update the pinned versions every day" approach… what is even the point of this exercise? Might as well not pin at all.
Don't update your dependencies manually. Set up Renovate to do it for you, with a delay of at least a couple of weeks, and enable vulnerability alerts so that it opens PRs for publicly known vulnerabilities without delay.
https://docs.renovatebot.com/configuration-options/#minimumr...
https://docs.renovatebot.com/presets-default/#enablevulnerab...
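For reference, a minimal renovate.json along those lines (a sketch to check against Renovate's docs; the two-week window is a judgment call):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "minimumReleaseAge": "14 days",
  "vulnerabilityAlerts": {
    "enabled": true
  }
}
```

`minimumReleaseAge` delays ordinary update PRs until a release has aged, while `vulnerabilityAlerts` lets PRs for known-vulnerable versions bypass that delay.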
Why was this comment downvoted? Please explain why you disagree.
I didn’t downvote, but...
Depending on a commercial service is out of the question for most open source projects.
3 replies →
>It's not feasible for me to audit every single one of my dependencies
Perhaps I’m just ignorant of web development, but why not? We do so with our desktop software.
An average complex .NET Core desktop app may have a dozen dependencies, if it even gets to that point. An average npm todo-list app may have several thousand, if not more.
Not you. But one would expect major cybersecurity vendors such as CrowdStrike to screen their dependencies, yet they are all over the affected list.
It looks like they actually got infected as well. So it's not only that, their security practices seem crap
Lots of software has update policies like this, and people will also run a separate test environment that updates to latest.
Install fewer dependencies, code more.
Sure, and I do that whenever I can. But I'm not going to write my own react, or even my own react-hook-form. I'm not going to rewrite stripe-js. Looking through my 16 direct dependencies -- that pull in a total of 653 packages, jesus christ -- there's only one of them that I'd consider writing myself (js-cookie) in order to reduce my dependency count. The rest would be a maintenance burden that I shouldn't have to take on.
There's this defense mechanism, I don't know what it's called, where someone takes a criticism to its extreme in order to complain that it's unfeasible.
Criticism: "You should shower every day"
Defense: "OH, maybe I should shower every hour, to the point where my skin dries and I can't get my work done because I'm in the shower all day."
No, there's a pretty standard way of doing things that you can care to learn, and it's very feasible. People shower every day during the week; sometimes they skip a day if they don't go out on the weekend; if it's very cold you can skip a day, and if it's hot you can even shower twice. You don't even need to wash your hair every day. There's nuance you can learn if you stop being so defeatist about it.
Similarly, you can of course install stripe-js since it's vendored from a paid provider with no incentive to fuck you with malware and with resources to audit dependency code, at any rate they are already a dependency of yours, so adding an npm package does not add a vendor to your risk profile.
Similarly, you can add react-hook-form if it's an official React package; if it isn't, then it's a risk, so investigate who publishes it. If it's some random GitHub account with an anime girl or furry avatar, maybe not. Especially if the package is something like an unofficial react-mcp-dotenv thing with access to critical secrets.
Another fallacy is that you have to rewrite the whole dependency you would otherwise import. False. You are not going to write a generic solution for all use cases, just for your own, and because of that it will be tightly integrated, of higher quality, and smaller (which helps with bandwidth, memory, and CPU caching). For god's sake, you used an example relating to forms? We've had forms since the dot-com boom; how come you are still having trouble with those? You should know them like the back of your hand.
4 replies →
React has zero dependencies and Stripe has one... What else do you need?
Copy-paste more.
I guess this is a joke, but imo it shouldn't be.
2 replies →
I wonder how long until LLMs spew the malware from those packages along with the code when you request the same functionality.
would you pay a subscription for a vetted repo?
If you pull something into your project, you're responsible for it working. Full stop. There are a lot of ways to manage/control dependencies. Pick something that works best for you, but be aware, due diligence, like maintenance is ultimately your responsibility.
Oh I'm well aware, and that's the problem. Unfortunately none of the available options hit anything close to the sweet spot that makes me comfortable.
I don't think this is a particularly unreasonable take; I'm a relative novice to the JS ecosystem, and I don't feel this uncomfortable taking on dependencies in pretty much any other ecosystem I participate in, even those (like Rust) where dependency counts can be high.
Acknowledging your responsibility doesn't make the problem go away. It's still better to have extra layers of protection.
I acknowledge that it is my responsibility to drive safely, and I take that responsibility seriously. But I still wear a seat belt and carry auto insurance.
That's very naive. We can do better than this.
Almost all software ships with a no-warranty clause. I am not a lawyer, but in pretty plain English every piece of software I have ever used has said exactly that: I can fuck off if I expect it to work or do anything.
To clarify: I don't think it is naive to assume the software is as-is with all responsibility on the user, since that is exactly what lawyers have made all software companies say for over 50 years.
8 replies →