Shai-Hulud malware attack: Tinycolor and over 40 NPM packages compromised
2 months ago (socket.dev)
A lot of the blog posts on this are AI generated, and since this is still developing, I'm just linking to a bunch of resources out there:
Socket:
- Sep 15 (First post on breach): https://socket.dev/blog/tinycolor-supply-chain-attack-affect...
- Sep 16: https://socket.dev/blog/ongoing-supply-chain-attack-targets-...
StepSecurity – https://www.stepsecurity.io/blog/ctrl-tinycolor-and-40-npm-p...
Aikido - https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
Ox - https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hi...
Safety - https://www.getsafety.com/blog-posts/shai-hulud-npm-attack
Phoenix - https://phoenix.security/npm-tinycolor-compromise/
Semgrep - https://semgrep.dev/blog/2025/security-advisory-npm-packages...
As a user of npm-hosted packages in my own projects, I'm not really sure what to do to protect myself. It's not feasible for me to audit every single one of my dependencies, and every one of my dependencies' dependencies, and so on. Even if I had the time to do that, I'm not a typescript/javascript expert, and I'm certain there are a lot of obfuscated things that an attacker could do that I wouldn't realize was embedded malware.
One thing I was thinking of was sort of a "delayed" mode to updating my own dependencies. The idea is that when I want to update my dependencies, instead of updating to the absolute latest version available of everything, it updates to versions that were released no more than some configurable amount of time ago. As a maintainer, I could decide that a package that's been out in the wild for at least 6 weeks is less likely to have unnoticed malware in it than one that was released just yesterday.
Obviously this is not a perfect fix, as there's no guarantee that the delay time I specify is enough for any particular package. And I'd want the tool to present me with options sometimes: e.g. if my current version of a dep has a vulnerability, and the fix for it came out a few days ago, I might choose to update to it (better eliminate the known vulnerability than refuse to update for fear of an unknown one) rather than wait until it's older than my threshold.
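Just to show this isn't far-fetched: the registry already exposes enough metadata to prototype it. A rough sketch (the package name and 6-week cutoff are just examples; assumes Node 18+ for the built-in fetch):

    // check-age.mjs: warn if the latest published version of a package is
    // younger than a configurable threshold (42 days ~ 6 weeks here)
    const pkg = process.argv[2] ?? "tinycolor2";
    const minAgeDays = 42;

    const meta = await (await fetch(`https://registry.npmjs.org/${pkg}`)).json();
    const latest = meta["dist-tags"].latest;
    const ageDays = (Date.now() - Date.parse(meta.time[latest])) / 86_400_000;

    console.log(`${pkg}@${latest} was published ${ageDays.toFixed(1)} days ago`);
    if (ageDays < minAgeDays) console.error("newer than the threshold -- consider waiting");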
> It's not feasible for me to audit every single one of my dependencies, and every one of my dependencies' dependencies
I think this is a good argument for reducing your dependency count as much as possible, and keeping them to well-known and trustworthy (security-wise) creators.
"Not-invented-here" syndrome is counterproductive if you can trust all authors, but in an uncontrolled or unaudited ecosystem it's actually pretty sensible.
Have we all forgotten the left-pad incident?
This is an ecosystem that has taken code reuse to the (unreasonable) extreme.
When JS was becoming popular, I’m pretty sure every dev cocked an eyebrow at the dependency system and wondered how it’d be attacked.
43 replies →
If it's not feasible to audit every single dependency, it's probably even less feasible to rewrite every single dependency from scratch. Avoiding that duplicated work is precisely why we import dependencies in the first place.
55 replies →
>> and keeping them to well-known and trustworthy (security-wise) creators.
The true threat here isn't the immediate dependency though, it's the recursive supply chain of dependencies. "Trustworthy" doesn't make any sense either when the root cause is almost always someone trustworthy getting phished. Finally, if I'm not capable of auditing the dependencies, it's unlikely I can replace them with my own code. That's like telling a vibe coder the solution to their brittle creations is to not use AI and write the code themselves.
3 replies →
"A little copying is better than a little dependency" -- Go proverb (also applies to other programming languages)
IMO, one thing I like about npm packages is that they are usually small, and they should ideally converge towards stability (frozen)...
If they are not, something is bad and the dependency should be "reduced" if at all possible.
Exactly.
I always tried to keep the dependencies to a minimum.
Another thing you can do is lock versions to a year ago (this is what linux distros do) and wait for multiple audits of something, or lack of reports in the wild, before updating.
2 replies →
> I think this is a good argument for reducing your dependency count as much as possible, and keeping them to well-known and trustworthy (security-wise) creators.
I wonder to what extent the extreme dependency count is a symptom of a standard library that is too minimalistic for the ecosystem's needs.
Perhaps this issue could be addressed by a "version set" approach to bundling stable npm packages.
1 reply →
Easier said than done when your ecosystem of choice took the Unix philosophy of doing one thing well, misinterpreted it and then drove it off a cliff. The dependency tree of a simple Python service is incomparable to a Node service of similar complexity.
As a security guy, for years you'd get laughed out of the room for suggesting devs limit their dependencies and not download half of the internet while building. You are an obstruction to making profit. And obviously reading the code does very little, since modern (and especially JavaScript) code just glues together frameworks and libraries, and there's no way a single human being is going to read a couple million lines of code.
There are no real solutions to the problem, except for reducing exposure somewhat by limiting yourself to a mostly frozen subset of packages that are hopefully vetted more stringently by more people.
The "solution" would be using a language with a strong standard library and then having a trusted 3rd party manually audit any approved packages.
THEN use artifactory on top of that.
That's boring and slow though. Whatever, I want my packages and I want them now. Part of the issue is that the whole industry is built upon goodwill and hope.
Some 19 year old hacked together a new front end framework last week, better use it in prod because why not.
Occasionally I want to turn off my brain and just buy some shoes. The Timberland website made that nearly impossible last week. When I gave up on logging in for free shipping and just paid full price, I get an email a few days later saying they ran out of shoes.
Alright. I guess Amazon is dominant for a reason.
41 replies →
This comes across as not being self-aware as to why security gets laughed out of rooms: I read this as you correctly identifying some risks but then only offering the false dichotomy of "risk" and "no risk", without discussing middle grounds between the two or finding third ways that break the dichotomy.
I could just be projecting my own bad experiences with "security" folks (in quotes as I can't speak to their qualifications). My other big gripe is when they don't recognize UX as a vital part of security (if their solution is unusable, it won't be used).
3 replies →
At my previous enterprise we had a saying:
Security: we put the ‘no’ in ‘innovation’.
I've always been very careful about dependencies, and freezing them to versions that are known to work well.
I was shocked when I found out that at some of the most profitable shops, most of their code is just a bunch of different third-party libraries badly cobbled together, with only a superficial understanding of how those libraries work.
Your proposed solution does not work for web applications built with node packages.
Essential tools such as Jest add 300 packages on their own.
You already have hundreds to thousands of packages installed, fretting over a few more for that DatePicker or something is pretty much a waste of time.
Agree on the only solution being reducing dependencies.
Even more weird in the EU, where things like the Cyber Resilience Act mandate patching publicly known vulnerabilities. Cool, so let's just stay up2date? Supply-chain vuln goes Brrrrrr
The post you replied to suggested a real solution to the problem. It was implemented in my current org years ago (after log4j) and we have not been affected by any of the malicious dependencies that have appeared since.
> You are an obstruction for making profit.
This explains a lot. Really, this is a big part of why society is collapsing as we speak.
"There should be no DRM in phones" - "You Are An Obstruction To Making Profit".
"People should own their devices, we must not disallow custom software on it" - "YAAOTMP"
"sir, the application will weigh 2G and do almost nothing yet, should we minify it or use different framework?" - "YAAOTMP".
"Madame, this product will cost too much and require unnecessary payments" - "YAAOTMP"
Etc. etc. Like in this "Silicon Valley" comedy series. But for real, and affecting us greatly.
1 reply →
Package registries should step up. They are doing some stuff but still NPM could do more.
Personally, I go further than this and just never update dependencies unless the dependency has a bug that affects my usage of it. Vulnerabilities are included.
It is insane to me how many developers update dependencies in a project regularly. You should almost never be updating dependencies, when you do it should be because it fixes a bug (including a security issue) that you have in your project, or a new feature that you need to use.
The only time this philosophy has bitten me was in an older project where I had to convince a PM who built some node project on their machine that the vulnerability warnings were not actually issues that affected our project.
Edit: because I don't want to reply to three things with the same comment - what are you using for dependencies where a) you require frequent updates and b) those updates are really hard?
Like for example, I've avoided updating node dependencies that have "vulnerabilities" because I know the vuln doesn't affect me. Rarely do I need to update to support new features because the dependency I pick has the features I need when I choose to use it (and if it only supports partial usage, you write it yourself!). If I see that a dependency frequently has bugs or breakages across updates then I stop using it, or freeze my usage of it.
Then you run the risk of drifting so much behind that when you actually have to upgrade it becomes a gargantuan task. Both ends of the scale have problems.
5 replies →
counterpoint, if the runtime itself (nodejs) has a critical issue, you haven't updated for years, you're on an end-of-life version, and you cannot upgrade because you have dependencies that do not support the new version of the runtime, you're in for a painful day. The argument for updating often is that when you -are- exposed to a vulnerability that you need a fix for, it's a much smaller project to revert or patch that single issue.
Otherwise, I agree with the sentiment that too many people try to update the world too often. Keeping up with runtime updates as often as possible (node.js is more trusted than any given NPM module) and updating only when dependencies are no longer compatible is a better middle ground.
1 reply →
Fully disagree. The problem is that when you do need to upgrade, either for a bug fix, security fix, or new feature that you need/want, it's a lot easier to upgrade if your last upgrade was 3 months ago than if it was 3 years ago.
This has bitten me so many times (usually at large orgs where policy is to be conservative about upgrades) that I can't even consider not upgrading all my dependencies at least once a quarter.
1 reply →
this seems to me to be trading one problem that might happen for one that is guaranteed: a very painful upgrade. Maybe you only do it once in a while but it will always suck.
The problem here is that there might be a bug fix or even security fix that is not backported to old versions, and you suddenly have to update to a much newer version in a short time
That works fine if you have few dependencies (obviously this is a good practice) and you have time to vet all updates and determine whether a vulnerability impacts your particular code, but that doesn’t scale if you’re a security organization at, say, a small company.
Dependency hell exists at both ends. Too quick can bite you just as much as being too slow/lazy.
The article explicitly mentions a way to do this:
Use NPM Package Cooldown Check
The NPM Cooldown check automatically fails a pull request if it introduces an npm package version that was released within the organization’s configured cooldown period (default: 2 days). Once the cooldown period has passed, the check will clear automatically with no action required. The rationale is simple - most supply chain attacks are detected within the first 24 hours of a malicious package release, and the projects that get compromised are often the ones that rushed to adopt the version immediately. By introducing a short waiting period before allowing new dependencies, teams can reduce their exposure to fresh attacks while still keeping their dependencies up to date.
This attack was only targeting user environments.
Keeping secrets in a different security context, like root- or secrets-user-owned files that are only accessible to the normal user through specific actions (the simplest way would be e.g. a sudoers file whitelisting a precise command like git push; sketch below), would prevent arbitrary reads of secrets.
The other part of this attack, creating new GitHub Actions workflows, is also a privilege that normal users don't need to exercise often or unconstrained. There are certainly ways to prevent/restrict that too.
All this "was a supply chain attack" fuss here is IMO missing the forest for the trees. Changing the security context for these two actions is easier to implement than supply chain analysis, and this basic approach is more reliable than trusting the community to find a backdoor before you apply the update. It's security 101. Sure, there are post-install scripts that can attack the system, but that is a whole different game.
1 reply →
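A minimal sketch of the sudoers idea above (the usernames and the exact command are made up; the point is the build user can trigger the privileged action without being able to read the credentials behind it):

    # /etc/sudoers.d/publish (illustrative only)
    # builduser may run exactly this one command as the account that owns the push credentials
    builduser ALL=(publisher) NOPASSWD: /usr/bin/git push origin main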
That's a feature of stepsecurity though, it's not built-in.
This is basically what I recommended people do with windows updates back when MS gave people a choice about when/if to install them, with shorter windows for critical updates and much longer ones for low priority updates or ones that only affected things they weren't using.
And hope there isn’t some recently patched zero-day RCE exploit at the same time.
> sort of a "delayed" mode to updating my own dependencies. The idea is that when I want to update my dependencies, instead of updating to the absolute latest version available of everything, it updates to versions that were released no more than some configurable amount of time ago.
For Python's uv, you can do something like:
> uv lock --exclude-newer $(date --iso -d "2 days ago")
Awesome tip, thanks!
oh that uv lock is neat, i am going to give that a go
pnpm just added this: https://pnpm.io/blog/releases/10.16
This sounds nice in theory, but does it really solve the issue? I think that if no one's installing that package then no one is noticing the malware and no one is reporting that package either. It merely slightly improves the chances that the author would notice a version they didn't release, but this doesn't work if the author isn't particularly actively working on the compromised project.
3 replies →
minimumReleaseAge is pretty good! Nice!!
I do wish there were some lists of compromised versions, that package managers could disallow from.
there's apparently an npm RFC from 2022 proposing a similar (but potentially slightly better?) solution https://github.com/npm/rfcs/issues/646
bun is also working on it: https://github.com/oven-sh/bun/issues/22679
Aren't they found quickly because people upgrade quickly?
this btw would also solve social media. if only accounts required a month waiting period before they could speak.
You can switch to the mentioned "delayed" mode if you're using pnpm. A few days ago, pnpm 10.16 introduced a minimumReleaseAge setting that delays the installation of newly released dependencies by a configurable amount of time.
https://pnpm.io/blog/releases/10.16
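If I'm reading the release notes right, it's a single setting in your pnpm config (pnpm-workspace.yaml on recent pnpm), with the value in minutes; double-check the linked post for the exact details:

    # delay installing newly published versions for ~24 hours (value in minutes, if I recall correctly)
    minimumReleaseAge: 1440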
> sort of a "delayed" mode
That's the secret lots of enterprises have relied on for ages. Don't be bleeding edge, let the rest of the world guinea pig the updates and listen for them to sound the alarm if something's wrong. Obviously you do still need to pay attention to the occasional, major, hot security issues and deal with them in a swift fashion.
Another good practice is to control when your updates occur - time them when it's ok to break things and your team has the bandwidth to fix things.
This is why I laughed hard when Microsoft moved to aggressively push Windows updates and the inevitable borking it did to people's computers at the worst possible times ("What's that you said? You've got a multi-million dollar deliverable pitch tomorrow and your computer won't start due to a broken graphics driver update?"). At least now there's a "delay" option similar to what you described, but it still riles me that update descriptions are opaque (so you can't selectively manage risk) and you don't really have the degree of control you ought to.
pnpm just added minimum age for dependencies https://pnpm.io/blog/releases/10.16#new-setting-for-delayed-...
From your link:
> In most cases, such attacks are discovered quickly and the malicious versions are removed from the registry within an hour.
By delaying the infected package's availability (by "aging" dependencies), we're only delaying the time, and reducing the samples, until it's detected. Infections that lie dormant are even more dangerous than explosive ones.
The only benefit would be if, during this freeze, repository maintainers were successfully pruning malware before it hits the fan, and the freeze would give scanners more time to finish their verification pipelines. That's not happening afaik, NPM is crazy fast going from `npm publish` to worldwide availability, scanning is insufficient by many standards.
1 reply →
Thank god, adopting this immediately. Next I’d like to see Go-style minimum version selection instead.
Oh brilliant. I've been meaning to start migrating my use to pnpm; this is the push I needed.
When using Go, you don't get updated indirect dependencies until you update a direct dependency. It seems like a good system, though it depends on your direct dependencies not updating too quickly.
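To illustrate that behaviour with the standard tooling (the module path is made up):

    go list -m -u all                    # lists available upgrades without applying any
    go get example.com/somedep@v1.4.2    # bumps exactly one direct dependency (and its requirements)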
The auto-updating behaviour of dependencies, caused by the `^` version prefix, is the root problem.
It's best to never use `^` and always specify an exact version, but many maintainers apparently can't be bothered with updating their dependencies themselves, so it became the default.
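For anyone who wants that as their default, npm can be told to write exact versions instead of caret ranges (the package name below is just a placeholder):

    npm config set save-exact true       # future installs get written as "1.2.3", not "^1.2.3"
    npm install some-pkg --save-exact    # or opt in per package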
Maybe one approach would be to pin all dependencies, and not use any new version of a package until it reaches a certain age. That would hopefully be enough time for any issues to be discovered?
People living on the latest packages with their dependabots never made any sense to me, ADR. They trusted their system too much
1 reply →
Packages can still be updated, even if pinned. If a dependency of a dependency is not pinned - it can still be updated.
Use less dependencies :)
And larger dependencies that can be trusted in larger blocks. I'll bet half of a given project's dependencies are there to "gain experience with" or to be able to name-drop that you've used them.
Less is More.
We used to believe that. And then W3C happened.
Stick to (pin) old stable versions, don't upgrade often. Pain in the butt to deal with eventual minimum-version-dependency limitations, but you don't get the brand new releases with bugs. Once a year, get all the newest versions and figure out all the weird backwards-incompatible bugs they've introduced. Do it over the holiday season when nobody's getting anything done anyway.
> One thing I was thinking of was sort of a "delayed" mode to updating my own dependencies.
You can do it:
https://github.blog/changelog/2025-07-01-dependabot-supports...
https://docs.renovatebot.com/configuration-options/#minimumr...
https://www.stepsecurity.io/blog/introducing-the-npm-package...
If your employer paid your dependencies' verified authors to provide them licensed and signed software, you wouldn't have to rely on a free third party intermediary with a history of distributing massive amounts of malware for your security.
> As a user of npm-hosted packages in my own projects, I'm not really sure what to do to protect myself. It's not feasible for me to audit every single one of my dependencies, and every one of my dependencies' dependencies, and so on. Even if I had the time to do that, I'm not a typescript/javascript expert, and I'm certain there are a lot of obfuscated things that an attacker could do that I wouldn't realize was embedded malware.
I think Github's Dependabot can help you here. You can also host your own little instance of DependencyTrack and keep up to date with vulnerabilities.
> One thing I was thinking of was sort of a "delayed" mode to updating my own dependencies.
You can do this with npm (since version 6.9.0).
To only get registry deps that are over a week old:
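(The command itself seems to have gotten lost in formatting; I believe it's the `--before` config option, roughly:)

    # only resolve versions published on or before the given date (GNU date shown)
    npm install --before="$(date -d '1 week ago' +%Y-%m-%d)"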
Source: Darcy Clarke - https://bsky.app/profile/darcyclarke.me/post/3lyxir2yu6k2s
I like to pin specific versions in my package.json so dependencies don't change without manual steps, and use "npm ci" to install specifically the versions in package-lock.json. My CI runs "npm audit" which will raise the alarms if a vulnerability emerges in those packages. With everything essentially frozen there either is malware within it, or there is not going to be, and the age of the packages softly implies there is not.
> instead of updating to the absolute latest version available of everything, it updates to versions that were released no more than some configurable amount of time ago
The problem with this approach is you need a certain number of guinea pigs on the bleeding edge or the outcome is the same (just delayed). There is no way for anyone involved to ensure that balance is maintained. Reducing your surface area is a much more effective strategy.
Not necessarily, some supply chain compromises are detected within a day by the maintainers themselves, for example by their account being taken over. It would be good to mitigate those at least.
1 reply →
I think it definitely couldn’t hurt. You’re right it doesn’t eliminate the threat of supply chain attacks, but it would certainly reduce them and wouldn’t require much effort to implement (either manually or via script). You’re basically giving maintainers and researchers time to identify new malware and patch or unrelease them before you’re exposed. Just make sure you still take security patches.
Rather than the user doing that "delay" installation, it would be a good idea if the package repository (i.e. NPM) actually enforced something like that.
For example, whenever a new version of a package is released, it's published to the repository but not allowed to be installed for at least 48 hours, and this gives time to any third-party observers to detect a malware early.
I recently started using npm for an application where there’s no decent alternative ecosystem.
The signal desktop app is an electron app. Presumably it has the same problem.
Does anyone know of any reasonable approaches to using npm securely?
“Reduce your transitive dependencies” is not a reasonable suggestion. It’s similar to “rewrite all the Linux kernel modules you need from scratch” or “go write a web browser”.
Most big tech companies maintain their own NPM registry that only includes approved packages. If you need a new package available in that registry you have to request it. A security team will then review that package and its deps and add it to the list of approved packages…
I would love to have something like that "in the open"…
1 reply →
> “Reduce your transitive dependencies” is not a reasonable suggestion. It’s similar to “rewrite all the Linux kernel modules you need from scratch” or “go write a web browser”.
Oh please, do not compare writing a bunch of utilities for your "app" with writing a web browser.
This is where distributed code audits come in, you audit what you can, others audit what they can, and the overlaps of many audits gives you some level of confidence in the audited code.
https://github.com/crev-dev/
> I'm not really sure what to do
You need an EDR and a code repo scanner. Treating this purely as a technical problem of the infrastructure will only accomplish so much. The people that created these systems are long gone and had/have huge gaps in their capabilities to stop creating these problems.
npm shrinkwrap and then check in your node_modules folder. Don't have each developer (or worse, user) individually run npm install.
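For reference, that's roughly the following (npm-shrinkwrap.json, unlike package-lock.json, also ships with the package if you publish it):

    npm shrinkwrap                            # promotes package-lock.json to npm-shrinkwrap.json
    git add npm-shrinkwrap.json node_modules
    git commit -m "vendor dependencies"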
It's common among grizzled software engineering veterans to say "Check in the source code to all of your dependencies, and treat it as if it were your own source code." When you do that, version upgrades are actual projects. There's a full audit trail of who did what. Every build is reproducible. You have full visibility into all code that goes into your binary, and you can run any security or code maintenance tools on all of it. You control when upgrades happen, so you don't have a critical dependency break your upcoming project.
You can use Sonatype or Artifactory as a self-hosted provider for your NPM packages that keeps its own NPM repository. This way you can delay and control updates. It is common enterprise practice.
I update my deps once a year or when I specifically need to. That helps a bit. Though it upsets the security theatre peeps at work who just blindly think dependabot issues means I need to change dependencies.
I never understood the "let's always pin everything to the latest version and let's update the pinned versions every day"… what is even the point of this exercise? Might as well not pin at all.
Don't update your dependencies manually. Setup renovate to do it for you, with a delay of at least a couple of weeks, and enable vulnerability alerts so that it opens PRs for publicly known vulnerabilities without delay
https://docs.renovatebot.com/configuration-options/#minimumr...
https://docs.renovatebot.com/presets-default/#enablevulnerab...
Why was this comment downvoted? Please explain why you disagree.
4 replies →
>It's not feasible for me to audit every single one of my dependencies
Perhaps I’m just ignorant of web development, but why not? We do so with our desktop software.
An average complex .NET Core desktop app may have a dozen dependencies, if it even gets to that point. An average npm todo-list app may have several thousand, if not more.
Not you. But one would expect major cybersecurity vendors such as Crowdstrike to screen their dependencies, yet they are all over the affected list.
It looks like they actually got infected as well. So it's not only that, their security practices seem crap
A lot of software has update policies like this, and then people will also run a separate test environment that updates to latest.
Install less dependencies, code more.
Sure, and I do that whenever I can. But I'm not going to write my own react, or even my own react-hook-form. I'm not going to rewrite stripe-js. Looking through my 16 direct dependencies -- that pull in a total of 653 packages, jesus christ -- there's only one of them that I'd consider writing myself (js-cookie) in order to reduce my dependency count. The rest would be a maintenance burden that I shouldn't have to take on.
6 replies →
Copy-paste more.
4 replies →
would you pay a subscription for a vetted repo?
If you pull something into your project, you're responsible for it working. Full stop. There are a lot of ways to manage/control dependencies. Pick something that works best for you, but be aware, due diligence, like maintenance is ultimately your responsibility.
Oh I'm well aware, and that's the problem. Unfortunately none of the available options hit anything close to the sweet spot that makes me comfortable.
I don't think this is a particularly unreasonable take; I'm a relative novice to the JS ecosystem, and I don't feel this uncomfortable taking on dependencies in pretty much any other ecosystem I participate in, even those (like Rust) where the dependency counts can be high.
Acknowledging your responsibility doesn't make the problem go away. It's still better to have extra layers of protection.
I acknowledge that it is my responsibility to drive safely, and I take that responsibility seriously. But I still wear a seat belt and carry auto insurance.
That's very naive. We can do better than this.
9 replies →
This happens because there's no auditing of new packages or versions. The distro's maintainer and the developer is the same person.
The general solution is to do what Debian does.
Keep a stable distro where new packages aren't added and versions change rarely (security updates and bugfixes only, no new functionality). This is what most people use.
Keep a testing/unstable distro where new packages and new versions can be added, but even then added only by the distro maintainer, NOT by the package developers. This is where the audits happen.
NPM, Python, Rust, Go, Ruby all suffer from this problem, because they have centralized and open package repositories.
This is a culture issue with developers who find it OK to have hundreds of (transitive) dependencies, and then follow processes that, for all intents and purposes, blindly auto update them, thereby giving hundreds of third-parties access to their build (or worse) execution environments.
Adding friction to the sharing of code doesn't absolve developers from their decision to blindly trust a ridiculous amount of third-parties.
I find that the issue is much more often not updating dependencies often enough with known security holes, than updating too often and getting hit with a supply-chain malware attack.
8 replies →
It's not unreasonable to trust large numbers of trustworthy dependency authors. What we lack are the institutions to establish trust reliably.
If packages had to be cryptographically signed by multiple verified authors from a per-organization whitelist in order to enter distribution, that would cut down on the SPOF issue where compromising a single dev is enough to publish multiple malware-infested packages.
7 replies →
> This is a culture issue with developers who find it OK to have hundreds of (transitive) dependencies, and then follow processes that, for all intents and purposes, blindly auto update them
I do not know about NPM. But in Rust this is common practice.
Very hard to avoid. The core of Rust is very thin, to get anything done typically involves dozens of crates, all pulled in at compile time from any old developer implicitly trusted.
2 replies →
See auto update bots on Github. https://docs.github.com/en/code-security/dependabot/dependab... And since Github does it, it must be a good thing, right? Right???
Unfortunately that's almost the whole industry. Every software project I've seen has an uncountable amount of dependencies. No matter if npm, cargo, go packages, whatever you name.
18 replies →
>absolve developers
Doesn't this ultimately go all the way up to the top?
You have 2 devs: one who mostly writes their own code, only uses packages that are audited etc; the other uses packages willy nilly. Who do you think will be hired? Who do you think will be able to match the pace of development that management and executives demand?
Rather than adding friction there is something else that could benefit from having as little friction as sharing code: publishing audits/reviews.
Be that as it may, a system that can fail catastrophically will. Security shouldn't be left to choice.
There is another related growing problem in my recent observation. As a Debian Developer, when I try to audit upstream changes before pulling them in to Debian, I find a huge amount of noise from tooling, mostly pointless. This makes it very difficult to validate the actual changes being made.
For example, an upstream bumps a version of a lint tool and/or changes style across the board. Often these are labelled "chore". While I agree it's nice to have consistent style, in some projects it seems to be the majority of the changes between releases. Due to the difficulty in auditing this, I consider this part of the software supply chain problem and something to be discouraged. Unless there's actually reason to change code (eg. some genuine refactoring a human thinks is actually needed, a bug fix or new feature, a tool exposed a real bug, or at least some identifiable issue that might turn into a bug), it should be left alone.
I agree with this and it's something I've encountered when just trying to understand a codebase or track down a bug. There's a bit of the tail wagging the dog as an increasing proportion of commits are "meta-code" that is just tweaking config, formatting, etc. to align with external tools (like linters).
> Unless there's actually reason to change code (eg. some genuine refactoring a human thinks is actually needed, a bug fix or new feature, a tool exposed a real bug, or at least some identifiable issue that might turn into a bug), it should be left alone.
The corollary to this is "Unless there's actually a need for new features that a new version provides, your existing dependency should be left alone". In other words things should not be automatically updated. This is unfortunately the crazy path we've gone down, where when Package X decides to upgrade, everyone believes that "the right thing to do" is for all its dependencies to also update to use that and so on down the line. As this snowballs it becomes difficult for any individual projects to hold the line and try to maintain a slow-moving, stable version of anything.
I'm using difftastic, it cuts down a whole lot of the noise
https://difftastic.wilfred.me.uk/
4 replies →
In Rust we have cargo vet, where we share these audits and use them in an automated fashion. Companies like Google and Mozilla contribute their audits.
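If anyone wants to try it, getting started is roughly the following (subcommands as I remember them from the cargo-vet docs, so double-check there):

    cargo install cargo-vet   # one-time tool install
    cargo vet init            # records current deps as exemptions to audit down over time
    cargo vet                 # fails if a new/changed dep has neither an audit nor an exemption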
I wish cargo went with crev instead, that has a much better model for distributed code audits.
https://github.com/crev-dev/
It's too bad MS doesn't own npm, and/or GitHub repositories. Wait
2 replies →
And it's a great idea, similar thematically to certificate transparency
How to backport security fixes to vetted packages?
I'd like to think there are ways to do this and keep things decentralized.
Things like: Once a package has more than [threshold] daily downloads for an extended period of time, it requires 2FA re-auth/step-up on two separate human-controlled accounts to approve any further code updates.
Or something like: for these popular packages, only a select list of automated build systems with reproducible builds can push directly to NPM, which would mean that any malware injector would need to first compromise the source code repository. Which, to be fair, wouldn't necessarily have stopped this worm from propagating entirely, but would have slowed its progress considerably.
This isn't a "sacrifice all of NPM's DX and decentralization" question. This is "a marginally more manual DX only when you're at a scale where you should be release-managing anyways."
> two separate human-controlled accounts to approve any further code updates.
Except most projects have 1 developer… Plus, if I develop some project for free I don't want to be wasting time and work for free for large rich companies. They can pay up for code reviews and similar things instead of adding burden to developers!
I think that we should impose webauthn 2fa on all npm accounts as the only acceptable auth method if you have e.g., more than 1 million total downloads.
Someone could pony up the cash to send out a few thousand yubikeys for this and we'd all be a lot safer.
13 replies →
You can use debian's version of your npm packages if you'd like. The issues you're likely to run into are: some libraries won't be packaged period by debian; those that are might be on unacceptably old versions. You can work around these issues by vendoring dependencies that aren't in your distro's repo, ie copying a particular version into your own source control, manually keeping up with security updates. This is, to my knowledge, what large tech companies do. Other companies that don't are either taking a known risk with regards to vulnerabilities, or are ignorant. Ignorance is very common in this industry.
Distros are struggling with the amount of packages they have to maintain and update regularly. That's one of the main reasons why languages built their own ecosystems in the first place. It became popular with CPAN and Maven and took off with Ruby gems.
Linux distros can't even provide all the apps users want, that's why freshmeat existed and we have linuxbrew, flatpak, Ubuntu multiverse, PPA, third party Debian repositories, the openSUSE Buildservice, the AUR, ...
There is no community that has the capacity to audit and support multiple branches of libraries.
The lack of an easy method to automatically pull in and manage dependencies in C/C++ is starting to look a lot more like a feature than a bug now.
Author of Odin is also against adding a package manager: https://www.gingerbill.org/article/2025/09/08/package-manage...
But there's so much UB in C++ that can be exploited that I doubt attackers lament the lack of a module system to target. ;)
To be clear, Debian does not audit code like you might be suggesting they do. There are checks for licensing, source code being missing, build reproducibility, tests and other things. There is some static analysis with lintian, but not systematically at the source code level with tools like cppcheck or rust-analyzer or similar. Auditing the entirety of the code for security issues just isn't feasible for package maintainers. Malware might be noticed while looking for other issues, that isn't guaranteed though, the XZ backdoor wasn't picked up by Debian.
https://lintian.debian.org/
> The general solution is to do what Debian does.
The problem with this approach is that frameworks tend to "expire" pretty quickly and you can't run anything for too long on Debian before the framework is obsolete. What I mean by obsolete: Debian 13 ships with Golang 1.24. A year from now it's gonna be Golang 1.26, and that is not being made available in trixie. So you have to find an alternative source for the latest golang deb. Same with PHP, Python etc. If you run them for 3 years with no updates, just some security fixes here and there, you're gonna wake up in a world of hurt when the next stable release comes out and you have to do en-masse updates that will most likely require huge refactoring because of syntax changes, library changes and so on.
And Javascript is a problem all by itself where versions come up every few months and packages are updated weekly or monthly. You can't run any "modern" app with old packages unless you accept all the bugs or you put in the work and fix them.
I am super interested in a solution for this that provides some security for packages pushed to NPM (the most problematic repository). And for distributions to have a healthy updated ecosystem of packages so you don't get stuck who knows for how long on an old version of some package.
And back to Debian, trixie ships with nginx 1.26.3-3+deb13u1. Why can't they continuously ship the latest stable version if they don't want to use the mainline one?
> Keep a stable distro where new packages aren't added and versions change rarely (security updates and bugfixes only, no new functionality). This is what most people use.
Unfortunately most people don't want old software that doesn't support newer hardware so most people don't end up using Debian stable.
It'd be interesting to see how much of the world runs on Debian containers, where most of the whole "it doesn't support my insert consumer hardware here" argument is completely moot.
I don't know why you went with hardware.
Most people don't want old software because they don't want old software.
They want latest features, fixes and performance improvements.
What hardware isn't supported by Debian stable that is supported by unstable?
Or is this just a "don't use Linux" gripe?
1 reply →
Enable the Backport sources. The recent kernels there have supported all my modern personal devices.
> Unfortunately most people don't want old software
"old" is a strange way to spell "new, unstable, and wormed".
I want old software. Very little new features are added to most things i care about, mostly it is just bloat, AI slop, and monthly subscription shakedowns being added to software today.
So, who is going to audit the thousands of new packages/versions that are published to npm every day? It only works for Debian because they hand-pick popular software.
Maybe NPM should hand pick popular packages and we should get away from this idea of every platform should always let everyone publish. Curation is expensive, but it may be worthwhile for mature platforms.
This is maybe where we could start getting into money into the opensource ecosystems.
One idea I've had is that publishing is open as today, but security firms could offer audit signatures.
So a company might pay security firms and only accept updates to packages that have been audited by by 1,2,3 or more of their paid services.
Thus money would be paid in the open to have eyes on changes for popular packages and avoid the problem of that weird lone maintainer in northern Finland being attacked by the Chinese state.
Errr, you! If you brought in the dependency, it is now your job to maintain it and diff every update for backdoors.
I've been arguing a couple of times that the 2 main reasons people want package management in languages are
1. Using an operating system with no package management.
2. Poor developer discipline, i.e. developers always trying to use the latest version of a package.
So now we have lots of poorly implemented language package managers, docker containers on top being used as another package management layer (even though that's not their primary purpose, many people use them like that), and the security implications of pulling in lots of random dependencies without any audit.
Developing towards a stable base like Debian would not be a panacea, but it would alleviate the problems by at least placing another audit layer in between.
Nope. It's because:
1. You don't want to tie your software to the OS. Most people want their software to be cross-platform. Much better to have a language-specific package manager because I'm using the same language on every OS. And when I say "OS" here, I really mean OS or Linux distro, because Linux doesn't have one package manager.
2. OS package managers (where they even exist), have too high a bar of entry. Not only do you have to make a load of different packages for different OSes and distros, but you have to convince all of them to accept them. Waaay too much work for all but the largest projects.
You're probably going to say "Good! It would solve this problem!", but I don't think the solution to package security is to just make it so annoying nobody bothers. We can do better than that.
1 reply →
It doesn't matter if the operating system I personally use has a good package manager, I need to release it in a form that all the people using it can work with. There are a lot of OSes out there, with many package managers.
Even if we make every project create packages in every package manager, it still wouldn't add any auditing.
Exactly, in a way Debian (or any other distro) is an extended standard library.
Yeah, after seeing all of the crazy stuff that has been occurring around supply chain attacks, and realizing that latest Debian stable (despite the memes) already has a lot of decent relatively up-to-date packages for Python, it's often easier to default to just building against what Debian provides.
Right. Like NPM, Debian also supports post-install hooks for its packages. Not great (ask Michael Stapelberg)! But this is still a bit better than the NPM situation because at least the people writing the hooks aren't the people writing the applications, and there's some standards for what is considered sane to do with such hooks, and some communal auditing of those hooks' behavior.
Linux distros could still stand to improve here in a bunch of ways, and it seems that a well-designed package ecosystem truly doesn't need such hooks at the level of the package manager at all. But this kind of auditing is one of the useful functions of downstream software distros for sure.
> NPM, Python, Rust, Go, Ruby all suffer from this problem, because they have centralized and open package repositories
Can you point me to Go's centralized package repository?
https://pkg.go.dev/
Which works together with
https://proxy.golang.org/
1 reply →
https://github.com/
13 replies →
Pretty unfeasible with the variety of packages/ecosystems that get created. You'd either end up requiring a LOT of dev time looking over packages on the maintainer end, or basically having none of the packages people need to use in your repository.
Finding the balance of that seems to me like it'd be incredibly difficult.
In practice, my experience is that this ends up with only old versions of things in the stable package repos. So many times I run into a bug, and then find out that the bug has been fixed in a newer version but it isn't updated in the stable repo. So now you end up pulling an update out of band, and you are in the same boat as before.
I don't know how you avoid this problem
You're overestimating the amount of auditing these distros do for the average package; in reality there is very little.
The reason these compromised packages typically don't make it in to e.g. Debian is because this all tends to be discovered quite quickly, before the package maintainer has a chance to update it.
security updates and bugfixes only
Just wondering: while this is less of an attack surface, it's still a surface?
NX NPM attack (at least the previous wave which targetted tinycolor) relied on running post-install scripts. Go tooling does not give you ways to run post-install scripts, which is much more reasonable approach.
> The general solution is to do what Debian does.
If you ask these people, distributions are terrible and need to die.
Python even removed PGP signatures from Pypi because now attestation happens by microsoft signing your build on the github CI and uploading it directly to pypi with a never expiring token. And that's secure, as opposed to the developer uploading locally from their machine.
In theory it's secure because you see what's going in there on git, but in practice github actions are completely insecure so malware has been uploaded this way already.
For python I use Debian packages wherever possible. What I need is in there usually. I might even say almost always.
Go’s package repository is just GitHub.
At the end of the day, it’s all a URL.
You’re asking for a blessed set of URLs. You’d have to convince someone to spend time maintaining that.
To split hairs, that's actually not true: Go's package manager works over version control, of which GitHub is currently just the most popular host. It also allows redirecting to your own version control via `go mod edit -replace`, which leaves the source code reference to GitHub intact but installs it from wherever you like.
2 replies →
Golang at least gives you the option to easily vendor-ize packages to your local repository. Given what has happened here, maybe we should start doing this more!
4 replies →
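For reference, the vendoring mentioned above is built into the toolchain:

    go mod vendor     # copies every dependency into ./vendor and records what's there
    go build ./...    # with vendor/ present, modern Go builds from it by default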
The problem with your idea is that you need to find the person who wants to do all this auditing of every version of Node/Python/Ruby libraries.
I believe good centralized infrastructure for this would be a good start. It could be "gamified" and reviewers could earn reputation for reviewing packages, common packages would be reviewed all the time.
Kinda like Stackoverflow for reviews, with optional identification and such.
And honestly an LLM can strap a "probably good" badge on things with cheap batch inference.
1 reply →
> suffer from this problem
Benefit from this feature.
I'm coming to the unfortunate realization that supply chain attacks like this are simply baked into the modern JavaScript ecosystem. Vendoring can mitigate your immediate exposure, but does not solve this problem.
These attacks may just be the final push I needed to take server rendering (without js) more seriously. The HTMX folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.
Traditional JS is actually among the safest environments ever created. Every day, billions of devices run untrusted JS code, and no other platform has seen sandboxed execution at such scale. And in nearly three decades, there have been very few incidents of large successful attacks on browser engines. That makes the JS engine derived from browsers the perfect tool to build a server side framework out of.
However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
Sandboxing doesn't do any good if the malicious code and target data are in the same sandbox, which is the whole point of these supply-chain attacks.
5 replies →
> Traditional JS is actually among the safest environments ever created.
> However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
Traditional JS is the reason we have all of these problems around NodeJS and npm. It's a lot better than it was, but a lot of JS tooling came up in the time when ES5 and older were the standard, and to call those versions of the language lacking is... charitable. There were tons of things that you simply couldn't count on the language or its standard library to do right, so a culture of hacks and bandaids grew up around it. Browser disparities didn't help either.
Then people said, "Well, why don't we all share these hacks and bandaids so that we don't have to constantly reinvent the wheel?", and that's sort of how npm got its start. And of course, it was the freewheeling days of the late 00s/early 10s, when you were supposed to "move fast and break things" as a developer, so you didn't have time to really check if any of this was secure or made any sense. The business side wanted the feature and they wanted it now.
The ultimate solution would be to stop slapping bandaids and hacks on the JS ecosystem by making a better language but no one's got the resolve to do that.
1 reply →
JavaScript doesn't have a standard library; until it does, the 170 million[1] weekly downloads of packages like UUID will continue. You can't expect people to re-write everything over and over.
[1]https://www.npmjs.com/package/uuid
12 replies →
None of those security guarantees matter when you take out the sandbox, which is exactly what server-side JS does.
The isolated context is gone and a single instance of code talking to an individual client has access to your entire database. It’s a completely different threat model.
3 replies →
I think the smallest C library I’ve seen was a single file to include on your project if you want terminal control like curses on windows. A lot of libraries on npm (and cargo) should be gist or a blog post.
2 replies →
Interestingly, AI should be able to help a lot with this desire to pull in small snippets.
What I'm wondering is whether it would help the ecosystem if you were able to load raw snippets into your codebase and source control, as opposed to having them as dependencies.
So e.g. shadcn component pasting approach.
For things like leftPad, CLI colors and others, you would just load raw TypeScript code from a source, and you would immediately notice something malicious, either up front or during code reviews.
You would leave actual npm packages for frameworks / larger packages where this doesn't make sense, and expect higher scrutiny and multi-party approval of releases there.
> I'm coming to the unfortunate realizattion that supply chain attacks like this are simply baked into the modern JavaScript ecosystem.
I see this odd take a lot - the automatic narrowing of the scope of an attack to the single ecosystem it occurred in most recently, without any real technical argument for doing so.
What's especially concerning is I see this take in the security industry: mitigations put in place to target e.g. NPM, but are then completely absent for PyPi or Crates. It's bizarre not only because it leaves those ecosystems wide open, but also because the mitigation measures would be very similar (so it would be a minimal amount of additional effort for a large benefit).
Could you say more about what mitigations you’re thinking of?
I ask because think the directionality is backwards here: I’ve been involved in packaging ecosystem security for the last few years, and I’m generally of the opinion that PyPI has been ahead of the curve on implementing mitigations. Specifically, I think widespread trusted publishing adoption would have made this attack less effective since there would be fewer credentials to steal, but npm only implemented trusted publishing recently[1]. Crates also implemented exactly this kind of self-scoping, self-expiring credential exchange ahead of npm.
(This isn’t to malign any ecosystem; I think people are also overcorrect in treating this like a uniquely JavaScript-shaped problem.)
[1]: https://github.blog/changelog/2025-07-31-npm-trusted-publish...
1 reply →
Most people have addressed the package registry side of NPM.
But NPM has a much, much bigger problem on the client side, that makes many of these mitigations almost moot. And that is that `npm install` will upgrade every single package you depend on to its latest version that matches your declared dependency, and in JS land almost everyone uses lax dependency declarations.
So, an attacker who simply publishes a new patch version of a package they have gained access to will likely poison a good chunk of all of the users of that package in a relatively short amount of time. Even if the projects using this are careful and use `npm ci` instead of `npm install` for their CI builds, it will still easily get developers to download and run the malicious new version.
Most other ecosystems don't have this unsafe-by-default behavior, so deploying a new malicious version of a previously safe package is not such a major risk as it is in NPM.
21 replies →
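To make the parent's distinction concrete (standard npm commands):

    npm ci         # installs exactly what package-lock.json records; fails if it disagrees with package.json
    npm install    # may re-resolve semver ranges to newer matching versions and rewrite the lockfile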
I agree other repos deserve a good look for potential mitigations as well (PyPI too, has a history of publishing malicious packages).
But don't brush off "special status" of NPM here. It is unique in that JS being language of both front-end and back-end, it is much easier for the crooks to sneak in malware that will end up running in visitor's browser and affect them directly. And that makes it a uniquely more attractive target.
8 replies →
Which mitigations specifically are in npm but not in crates.io?
As far as I know crates.io has everything that npm has, plus
- strictly immutable versions[1]
- fully automated and no human in the loop perpetual yanking
- no deletions ever
- a public and append only index
Go modules go even further and add automatic checksum verification per default and a cryptographic transparency log.
Contrast this with docker hub for example, where not even npm's basic properties hold.
So, it is more like
docker hub ⊂ npm ⊂ crates.io ⊂ Go modules
[1] Nowadays npm has this arguably too
2 replies →
I mostly agree. But NPM is special, in that the exposure is so much higher. The hypothetical python+htmx web app might have 10s of dependencies (including transitive) whereas your typical Javascript/React will have 1000s. All an attacker needs to do is find one of many packages like TinyColor or Leftpad or whatever and now loads of projects are compromised.
23 replies →
Most of the biggest repositories already cooperate through the OpenSSF[0]. Last time I was involved in it, there were representatives from npm, PyPI, Maven Central, Crates and RubyGems. There's also been funding through OpenSSF's Alpha-Omega program for a bunch of work across multiple ecosystems[1], including repos.
[0] https://github.com/ossf/wg-securing-software-repos
[1] https://alpha-omega.dev/grants/grantrecipients/
The Rust folks are in denial about this
Until you go get malware
Supply chain attacks happen at every layer where there is package management or a vector onto the machine or into the code.
What NPM should do if they really give a shit is start requiring 2FA to publish. Require a scan prior to publish. Sign the package with hard keys and signature. Verify all packages installed match signatures. Semver matching isn’t enough. CRC checks aren’t enough. This has to be baked into packages and package management.
> Until you go get malware
While technically true, I have yet to see Go projects importing thousands of dependencies. They may certainly exist, but are absolutely not the rule. JS projects, however...
We have to realize that, while supply chain attacks can happen everywhere, the best mitigations are development culture and a solid standard library - looking at you, cargo.
I am a JS developer by trade and I think that this ecosystem is doomed. I absolutely avoid even installing node on my private machine.
3 replies →
> Sign the package with hard keys and signature.
That's really the core issue. Developer-signed packages (npm's current attack model is "Eve doing a man-in-the-middle attack between npm and you," which is not exactly the most common threat here) and a transparent key registry should be minimal kit for any package manager, even though all, or at least practically all, the ecosystems are bereft of that. Hardening API surfaces with additional MFA isn't enough; you have to divorce "API authentication" from "cryptographic authentication" so that compromising one doesn't affect the other.
3 replies →
> What NPM should do if they really give a shit is start requiring 2FA to publish.
How does 2FA prevent malware? Anyone can get a phone number to receive a text or add an authenticator to their phone.
I would argue a subscription model for 1 EUR/month would be better. The money received could pay for certification of packages, and the credit card on file can leverage the security of the payments system.
How will multi-factor-authentication prevent such a supply chain issue?
That is, if some attacker creates some trivial but convenient dummy package and 2 years later half the package hub somehow depends on it, the attacker will just use their legit credentials to pwn everyone and their dog. This is not even about stealing credentials. It's a cultural issue of bare blind trust: a blank check without even an expiry date.
https://en.wikipedia.org/wiki/Trust,_but_verify
2 replies →
If NPM really cared, they'd stop recommending people use their poorly designed version control system that relies on late-fetching third-party components required by the build step, and they'd advise people to pick a reliable and robust VCS like Git for tracking/storing/retrieving source code objects and stick to that. This will never happen.
NPM has also been sending out nag emails for the last 2+ years about 2FA. If anything, that constituted an assist in the attack on the Junon account that we saw a couple weeks ago.
2 replies →
NPM does require 2FA to publish. I would love a workaround! Isn't it funny that even here on HN, misinformation is constantly being spread?
4 replies →
They are. Any language that depends heavily on package managers and lacks a standard lib is vulnerable to this.
At some point people need to realize and go back to writing vanilla js, which will be very hard.
The rust ecosystem is also the same. Too much dependence on packages.
An example of doing it right is golang.
The solution is not to go back to vanilla JS, it's for people to form a foundation and build a more complete utilities library for JS that doesn't have 1000 different dependencies, and can be trusted. Something like Boost for C++, or Apache Commons for Java.
1 reply →
Python and Rust both have decent std libs, but it is just a matter of time before this happens in those ecosystems. There is nothing unique about this specific attack that could only happen in JavaScript.
>and go back to writing vanilla js
Lists of things that won't happen. Companies are filled with node_modules importers these days.
Even worse, now you have to check for security flaws in that JS that's been written by node_modules importers.
That, or someone could write a standard library for JS?
Some of us are fortunate to have never left vanilla JS.
Of course that limits my job search options, but I can't feel comfortable signing off on any project that includes more dependencies than I can count at a glance.
C#, Java, and so on.
Is the difference between the number of dev dependencies for eg. VueJs (a JavaScript library for marshalling Json Ajax responses into UI) and Htmx (a JavaScript library for marshalling html Ajax responses into UI) meaningful?
There is a difference, but it's not an order of magnitude and neither is a true island.
Granted, deciding not to use JS on the server is reasonable in the context of this article, but for the client htmx is as much a js lib with (dev) dependencies as any other.
https://github.com/bigskysoftware/htmx/blob/master/package.j...
https://github.com/vuejs/core/blob/main/package.json
Except that htmx's recommended usage is as a single <script> injected directly into your HTML page, not as an npm dependency. So unless you are an htmx contributor you are not going to be installing the dev dependencies.
1 reply →
AFAICT, the only thing this attack relies on is the lack of scrutiny by developers when adding new dependencies.
Unless this lack of scrutiny is exclusive to the JavaScript ecosystem, this attack could just as well have happened in Rust or Golang.
I don't know Go, but Rust absolutely has the same problem, yes. So does Python. NPM is being discussed here, because it is the topic of the article, but the issue is the ease with which you can pull in unvetted dependencies.
Languages without package managers have a lot more friction to pull in dependencies. You usually rely on the operating system and its package-manager-humans to provide your dependencies; or on primitive OSes like Windows or macOS, you package the dependencies with your application, which involves integrating them into your build and distribution systems. Both of those involve a lot of manual, human effort, which reduces the total number of dependencies (attack points), and makes supply-chain issues like this more likely to be noticed.
The language package managers make it trivial to pull in dozens or hundreds of dependencies, straight from some random source code repository. Your dependencies can add their own dependencies, without you ever knowing. When you have dozens or hundreds of unvetted dependencies, it becomes trivial for an attacker to inject code they control into just one of those dependencies, and then it's game over for every project that includes that one dependency anywhere in their chain.
It's not impossible to do that in the OS-provided or self-managed dependency scenario, but it's much more difficult and will have a much narrower impact.
1 reply →
There is little point in you scrutinizing new dependencies.
Many who claim to fully analyze all dependencies are probably lying. I did not see anyone in the comments sharing their actual dependency count.
Even if you depend only on Jest - Meta's popular test runner - you add 300 packages.
Unless your setup is truly minimalistic, you probably have hundreds of dependencies already, which makes obsessing over some more rather pointless.
At least in the JS world there are more people (often also more inexperienced people) who will add a dependency willy-nilly. This is due to many people starting out with JS these days.
JavaScript does have some pretty insane dependency trees. Most other languages don’t have anywhere near that level of nestedness.
25 replies →
That, and the ability to push an update without human interaction.
The blast radius is made far worse by npm having the concept of "postinstall" which allows any package the ability to run a command on the host system after it was installed.
This works for deps of deps as well, so anything in your node_modules has access to this hook.
It's a terrible idea and something that ought to be removed or replaced by something much safer.
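For readers who haven't seen the mechanism, a hypothetical package declaring such a hook looks roughly like this; npm runs the script with your user's privileges as soon as the package (or any transitive dependency carrying it) is installed.

```json
{
  "name": "innocuous-utility",
  "version": "1.0.3",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Nothing in that declaration is inspected or constrained by default, so `setup.js` can read `~/.npmrc`, environment variables, SSH keys, or anything else the installing user can.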
I agree in principle, but child_process is a thing so I don't think it makes much difference. You are pwned either way if the package can ever execute code.
Simply avoiding Javascript won't cut it.
While npm is a huge and easy target, the general problem exists for all package repositories. Hopefully a supply chain attack mitigation strategy can be better than hoping attackers target package repositories you aren't using.
While there's a culture prevalent in Javascript development to ignore the costs of piling abstractions on top of abstractions, you don't have to buy into it. Probably the easiest thing to do is count transitive dependencies.
> Simply avoiding Javascript won't cut it.
But it will cut a large portion of it.
Javascript is badly over-used and over-depended on. So many websites just display text and images, but have extremely heavy javascript libraries because that's what people know and that is part of the default, and because it enables all the tracking that powers the modern web. There's no benefit to the user, and we'd be better off without these sites existing if there were really no other choice but to use javascript.
NPM does seem vastly over represented in these type of compromises, but I don't necessarily think that e.g. pypi is much better in terms of security. So you could very well be correct that NPM is just a nicer, perhaps bigger, target.
If you can sneak malware into a JavaScript application that runs in millions of browsers, that's a lot more useful than getting some number of servers running a module as part of a script, whose environment is a bit unknown.
Javascript really could do with a standard library.
> So many websites just display text and images
Eh... This over-generalises a bit. That can be said of anything really, including native desktop applications.
2 replies →
Rendering template partials server-side and fetching/loading content updates with HTMX in the browser seems like the best of all worlds at this point.
Until you need to write JavaScript?
9 replies →
> These attacks may just be the final push I needed to take server rendering (without js) more seriously
Have fun, seems like a misguided reason to do that though.
A. A package hosted somewhere using a language was compromised!
B. I am not going to program in the language anymore!
I don't see how B follows A.
Why is this inevitable? If you use only easily verifiable packages you've lost nothing. The whole concept of npm automatically executing postinstall scripts was fixed when my pnpm started asking me every time a new package wanted to do that.
HTMX is full of JavaScript. Server-side-rendering without JavaScript is just back to the stuff Perl and PHP give you.
I don't think the point is to avoid Javascript, but to avoid depending on a random number of third-parties.
> Server-side-rendering without JavaScript is just back to the stuff Perl and PHP give you.
As well as Ruby, Python, Go, etc.
HTMX does not have external dependencies, only dev dependencies, reducing the attack surface.
Do you count LiveView (Elixir) in that assessment?
> The HTMX folks convinced me that I can get REALLY far without any JavaScript
HTMX is JavaScript.
Unless you meant your own JavaScript.
When we say 'htmx allows us to avoid JavaScript', we mean two things: (1) we typically don't need to rely on the npm ecosystem, because we need very few (if any) third-party JavaScript libraries; and (2) htmx and HTML-first allow us to avoid writing a lot of custom JavaScript that we would have otherwise written.
This is going to become an issue for a lot of package managers, not just npm. Npm is clearly a very viable target right now, though. These attacks are going to get more and more sophisticated.
Took that route myself and I don't regret it. Now I can at least entirely avoid Node.js ecosystem.
Not for the frontend. esm modules work great nowadays with import maps.
> supply chain attacks
You all really need to stop using this term when it comes to OSS. Supply chain implies a relationship, none of these companies or developers have a relationship with the creators other than including their packages.
Call it something like "free code attacks" or "hobbyist code attacks."
“code I picked up off the side of the road”
“code I somehow took a dependency on when copying bits of someone’s package.json file”
“code which showed up in my lock file and I still don’t know how it got there”
1 reply →
I know CrowdStrike have a pretty bad reputation but calling them hobbyists is a bit rude.
1 reply →
A supply chain can have hobbyists, there's no particular definition that says everyone involved must be a professional registered business.
This vulnerability was reported to NPM in 2016: https://blog.npmjs.org/post/141702881055/package-install-scr... https://www.kb.cert.org/vuls/id/319816 but the NPM response was WAI.
Acronym expansion for those-not-in-the-know (such as me before a web search): WAI might mean "working as intended", or possibly "why?"
Thank you. It's frustrating when people use uncommon acronyms without explaining them.
2 replies →
Even if we didn't have post install scripts wouldn't the malware just run as soon as you imported the module into your code during the build process, server startup, testing, etc?
I can't think of an instance where I ran npm install and didn't run some process shortly after that imported the packages.
Many people have non-JS backends and only use npm for frontend dependencies. If a postinstall script runs in a dev or build environment it could get access to a lot of things that wouldn't be available when the package is imported in a browser or other production environment.
2 replies →
NPM belongs to Microsoft. What do you expect?
NPM was acquired 4 years after that post
It's crazy to me that npm still executes postinstall scripts by default for all dependencies. Other package managers (Pnpm, Bun) do not run them for dependencies unless they are added to a specific allow-list. Composer never runs lifecycle scripts for dependencies.
This matters because dependencies are often installed in a build or development environment with access to things that are not available when the package is actually imported in a browser or other production environment.
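As a rough sketch of the allow-list approach (pnpm's field name shown here; check your package manager's docs for the exact syntax), dependency lifecycle scripts are skipped unless the package is explicitly listed:

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```

Everything else gets installed without its install scripts ever running, which removes the easiest propagation vector while keeping the handful of packages that genuinely need a build step working.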
I'm also wondering why huge scale attacks like this don't happen for other package managers.
Like, for rust, you can have a build.rs file that gets executed when your crate is compiled, I don't think it's sandboxed.
Or also on other languages that will get run on development machines, like python packages (which can trigger code only on import), java libraries, etc...
Like, there is the post-install script issue of course, but I feel like these attacks could have been just as (or almost as) effective in other programming languages, yet we always only hear about npm packages.
All package managers are vulnerable to this type of attack, it just happens that npm is like 10+ times more popular than the others, so it gets targeted often.
It's only JS devs that constantly rebuild their system with full dependency updates, so they are the most attractive target.
It's a lot harder to do useful things with backend languages. JavaScript is more profitable as you can do the crypto wallet attacks without having to exploit kernel zero days.
3 replies →
for the same reason that scams are kind of obvious if you care to look: use of js / npm is an automatic filter for a more clueless target.
Seems like this is a fairly recent change, for Pnpm at least, https://socket.dev/blog/pnpm-10-0-0-blocks-lifecycle-scripts...
What has been the community reaction? Has allowing scripts been scalable for users? Or could it be described as people blindly copying and pasting allow commands?
I am involved in Python packaging discussions and there is a pre-proposal (not at PEP stage yet) at the moment for "wheel variants" that involves a plugin architecture, a contentious point is whether to download and run the plugins by default. I'd like to find parallels in other language communities to learn from.
In my experience, packages which legitimately require a postinstall script to work correctly are very rare. For the apps I maintain, esbuild is the only dependency which benefits from a postinstall script to slightly improve performance (though it still works without the script). So there's no scaling issue adding one or two packages to a whitelist if desired.
It does not, since version 11:
https://docs.npmjs.com/cli/v11/using-npm/changelog#1100-pre0...
Yes it does, since the ignore-scripts option is not enabled by default.
1 reply →
When the left-pad debacle happened, one commenter here said of a well known npm maintainer something to the effect of that he's an "author of 600 npm packages, and 1200 lines of JavaScript".
Not much has changed since then. The best counter-example I know is esbuild, which is a fully featured bundler/minifier/etc that has zero external dependencies except for the Go stdlib + one package maintained by the Go project itself:
https://www.npmjs.com/package/esbuild?activeTab=dependencies
https://github.com/evanw/esbuild/blob/755da31752d759f1ea70b8...
Other "next generation" projects are trading one problematic ecosystem for another. When you study dependency chains of e.g. biomejs and swc, it looks pretty good:
https://www.npmjs.com/package/@biomejs/biome/v/latest?active...
https://www.npmjs.com/package/@swc/types?activeTab=dependenc...
Replacing the tire fire of eslint (and its hundreds to low thousands of dependencies) with zero of them! Very encouraging, until you find the Rust source:
https://github.com/biomejs/biome/blob/a0039fd5457d0df18242fe...
https://github.com/swc-project/swc/blob/6c54969d69551f516032...
I think as these projects gain more momentum, we will see similar things cropping up in the cargo ecosystem.
Does anyone know of other major projects written in as strict a style as esbuild?
Part of the reason of my switch to using Go as my primary language is that there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
These kind of projects usually are pretty great because they aim to work with CGO_ENABLED=0 so the libs are very portable and work with different syscall backends.
Additionally I really like to go mod vendor my snapshot of dependencies which is great for short term fixes, but it won't fix the cause in the long run.
However, the go ecosystem is just as vulnerable here because of lack of signing off package updates. As long as there's no verification possible end-to-end when it comes to "who signed this package" then there's no way this will get better.
Additionally, most supply chain attacks in the past focused on the CI/CD infrastructure, because it is just as broken, with just as many problems. There needs to be a better CI/CD workflow where signing keys don't have to be available on the runners themselves; otherwise this will just shift the attack surface to a different location.
In my opinion the package managers are somewhat to blame here, too. They should encourage and mandate gpg signatures, and especially in git commits when they rely on git tags for distribution.
> there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
IMO, it might be due to the fact that Go mod came rather late in the game, while NPM was introduced near the beginning of NodeJS. But it might be more related to Go's target audience being more low-level, where such tools are less ubiquitous?
15 replies →
Yes, eslint is particularly frustrating: https://npmgraph.js.org/?q=eslint
There are plenty of people in the community who would help reduce the number of dependencies, but it really requires the maintainers to make it a priority. Otherwise the only way to address it is to switch to another solution like oxlint.
I tried upgrading ESLint recently and it took me forever to fix all the dependency issues. I wish I'd never used ESLint + Prettier, as now my codebase styling is locked into an ESLint config :/
5 replies →
That's like 85 dependencies, not hundreds or even thousands.
Jest pulls in 300 by the way.
The answer is to not pull in dependencies for things you can easily write yourself. That would probably reduce dependencies by 2/3 or so in many projects, especially the left-pad-style things. If you write properly self-contained small parts and a few tests, you probably don't have to touch them much, and the maintenance burden is not that high. Compare that with having to check every little dependency like left-pad, all its code, and its dependencies. If a dependency is not strictly necessary, then don't add it.
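To make that concrete, here is a sketch of the kind of helper that never needs to be a dependency (not the original left-pad code):

```js
// Pad `value` on the left with `ch` until it is at least `len` characters long.
function leftPad(value, len, ch = ' ') {
  let str = String(value);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}
```

And even this is redundant on modern runtimes, which ship `String.prototype.padStart` built in.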
That's not an answer at all. Jest alone adds 300 packages.
Why don't you share with us what your project does and how many packages are present?
4 replies →
> Very encouraging, until you find the Rust source
Those are the workspace dependencies, not the dependencies of the specific crates you may use within the project. You have to actually look closer to find that out; most of the `swc-` crates have shallow dependency trees.
> Does anyone know of other major projects written in as strict a style as esbuild?
As in any random major project with focus on not having dependencies? SQLite comes to mind.
The downside is now I need to know Golang to audit my JavaScript project.
And it runs a post-install: node install.js
So I really do have to trust it or read all the code.
According to Aikido Security the attack has now targeted 180+ packages: https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
I wonder who actually discovered this attack? Can we credit them? The phrasing in these posts is interesting, with some taking direct credit and others just acknowledging the incident.
Aikido says: > We were alerted to a large-scale attack against npm...
Socket says: > Socket.dev found compromised various CrowdStrike npm packages...
Ox says: > Attackers slipped malicious code into new releases...
Safety says: > The Safety research team has identified an attack on the NPM ecosystem...
Phoenix says: > Another supply chain and NPM maintainer compromised...
Semgrep says: > We are aware of a number of compromised npm packages
Mackenzie here, I work for Aikido. This is a classic example of the security community all playing a part. The very first notice of this was from a developer named Daniel Pereira. He alerted Socket, who did the first review of the malware and discovered 40 packages. After that, Aikido discovered an additional 147 packages and the CrowdStrike packages. I'm not sure how Step found it, but they were the first to really understand the malware and that it was a self-replicating worm. So multiple parties all playing a part, kind of independently. It's pretty cool
Question: how does your product help in these situations? I imagine it'd require someone to report a compromised package, and then you guys could detect it in my codebase?
1 reply →
Several individual developers seem to have noticed it at around the same time with Step and Socket pointing to different people in their blogs.
And then vendors from Socket, Aikido, and Step all seem to have detected it via their upstream malware detection feeds - Socket and Aikido do AI code analysis, and Step does eBPF monitoring of build pipelines. I think this was widespread enough it was noticed by several people.
Since so many vendors discovered these packages seemingly independently, you'd think that they would share those mechanisms with NPM itself so that those packages would never be published in the first place. But I guess that removes their ability to sell an "early alert" mechanism through their offerings...
NPM is owned by github/microsoft. I'm sure they could afford to buy one of these products or just build their own, but clearly security is not a thing they care about.
18 replies →
OP article says: > The incident was discovered by @franky47, who promptly notified the community through a GitHub issue.
Points to this, which does look like the first mention.
https://github.com/scttcper/tinycolor/issues/256
Usually security companies monitor CVEs and the security mailing lists. That's how they all end up releasing the blog posts at the same time. It's because they are all using the same primary source.
I set "ignore-scripts=true" (https://docs.npmjs.com/cli/v11/using-npm/config#ignore-scrip...) in npmrc(5). This changed the defaults for npm(1).
The Semgrep blog under "Additional NPM Registry Security Advice / Reducing Run Scripts" says "reducing" not "ignoring". I need to check if there are still "run scripts" even with this setting.
Also I need to check if there is the same class of vulnerabilities in other package managers I use, like emacs(1) (M-x package-install), mvn(1) (Maven, Java), clj(1) (deps.edn, Clojure), luarocks(1) (Lua), deps(1) (deps.fnl, Fennel), nbb(1) (deps.edn, Node.js babashka). Although some do not have "run scripts" feature, I need to make sure.
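For reference, the setting is a one-liner in `~/.npmrc` (there is also a per-invocation flag, `npm install --ignore-scripts`):

```
# ~/.npmrc
ignore-scripts=true
```

Per the npm docs, commands you invoke explicitly (e.g. `npm run build`, `npm test`) still run their named script with this set; it is the lifecycle scripts of installed packages, and pre/post hooks, that are skipped.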
> Shai Hulud
Clever name... but I would have expected malware authors to be a bit less obvious. They literally named their giant worm after a giant worm.
> At the core of this attack is a ~3.6MB minified bundle.js file
Yep, even malware can be bloated. That's in the spirit of NPM I guess...
I suppose it's only a matter of time before one of these supply chain attacks unintentionally pulls in a second, unrelated supply chain attack.
fish grow to meet the size of the fishbowl
Malware has to follow Moore's law; the Tequila virus was ~2.6kb in 1991.
Not quite Moore's law: growth of only 1.226x per year
I think these kinds of attack would be strongly reduced if js had a strong standard library.
If it was provided, it would significantly trim dependency trees of all the small utility libraries.
Perhaps we need a common community effort to create a “distro” of curated and safe dependencies one can install safely, by analyzing the most popular packages and checking what’s common and small enough to be worth being included/forked.
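Some of that trimming is already possible today; a number of once-ubiquitous micro-packages have built-in equivalents in modern runtimes (a non-exhaustive sample, availability depending on your Node/browser targets):

```js
// Built-ins that replace once-common micro-dependencies (recent Node / evergreen browsers):
console.log('5'.padStart(3, '0'));              // "005" - instead of left-pad
console.log([1, [2, [3]]].flat(Infinity));      // [1, 2, 3] - instead of flatten helpers
console.log(structuredClone({ a: { b: 1 } }));  // deep copy - instead of clone packages
console.log(crypto.randomUUID());               // instead of uuid (global in Node 19+, browsers)
```

A curated "distro" could lean on these first and only fork or vendor what the platform still lacks.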
> Perhaps we need a common community effort to create a “distro” of curated and safe dependencies one can install safely, by analyzing the most popular packages and checking what’s common and small enough to be worth being included/forked.
Debian is a common community effort to create a “distro” of curated and safe dependencies one can install safely.
If you want stable, tested versions of software, only getting new versions every few years:
https://packages.debian.org/stable/javascript/
If you want the newer versions of software, less tested, getting new versions continuously:
https://packages.debian.org/unstable/javascript/
Node.js has been adding APIs that make it feasible to write stuff without dependencies, it's slowly getting there.
What stuff?
https://jsr.io/@std
Ever seen XKCD #927? (https://xkcd.com/927)
Joking aside, I don't think there ever really was a lack of initiatives by entities (communities, companies, whatever) to create some sort of standard library (we typically tend to call them frameworks). There's just simply too much diversity, cultures and subcultures within the whole JavaScript sphere to ever get a global consensus on what that "standard" library then should look like. Not to mention the commercial entities with very real stakes in things they might not want to relinquish to some global unity consensus (as it may practically hurt their current bottom line).
Letting upstream authors write code that the package manager runs at install time isn't a sane thing for package managers to allow. It promotes all kinds of hacky shit and makes packages harder to work with programmatically, and it also provides this propagation vector. Packages also shouldn't have arbitrary network access at build time for both of those two same reasons!
There's been a lot of talk here about selecting and auditing dependencies, which is fine and good. But this attack and lots of other supply chain attacks would also be avoided with a better-behaved package manager. Doesn't Deno solve this? Do any other JS package managers do some common-sense sandboxing?
Yes, migration is painful. Yes, granular permissions are more annoying to figure out than anything-can-do-anything. But is either as painful as vendoring/forking your dependencies without the aid of a package manager altogether? If you're really considering just copying and pasting instead of using NPM, maybe you should also consider participating in a saner package ecosystem. If you're ready to do the one, maybe you're ready to do the other.
I try to stay as far from web development as possible in my programming career (kernel/drivers and most recently reverse engineering), so maybe I'm ill-informed here, but this npm thing seems uniquely terrible at security, and I cannot fathom why the entire web seems to be automatically downloading updates from it and pushing them into production with no oversight.
I've always worked at companies where we use third-party open source libraries and utilities, and it's true that they get a less-than-ideal amount of auditing when they get updated, but at least we're not constantly pushing updates out to our customers solely for the sake of using the latest version. In fact, usually they're out of date by several years, which is also a problem, but generally there'll be a guy following the mailing lists for updates in case there's a known exploit that needs to be patched.
I think all public package registries have this problem as it's not unique to npm.
The "blind" auto updating to latest versions seems to be also an issue here, simply you cannot trust it enough as there is (seemingly) no security vetting process (I mean if you get obfuscated gibberish pushed into a relatively sanely written codebase it should ring some alarms somewhere).
Normally you'd run tests after releasing new versions of your website but you cannot catch these infected parts if they don't directly influence the behavior of your functionality.
A lot of it is just that it's at the local maximum of popularity and relative user inexperience, so it's the juiciest target.
But also, npm was very much (like js you could argue) vibed into existence in many ways, eg with the idea of a lock file (eg reproducible builds) _at all_ taking a very long time to take shape.
We got lockfiles in 2016 (yarn) and 2017 (npm), before Go, Ruby, and others; I believe python is just getting a lockfile standard approved now.
You could already specify exact versions in your package.json, same as a Gemfile, but reality is that specifying dependencies by major version or “*” was considered best practice, to always have the latest security updates. Separating version ranges from the lock files, and requiring explicit upgrades was a change in that mindset – and mostly driven by containerization rather than security or dev experience.
While npm indeed seems most vulnerable, it looks to me like the actual damage done is very small.
Some people had their crypto wallets drained I guess, but as far as I am concerned nothing of any real value was lost.
One could argue that your field saw exploits that did far more damage, no?
Velocity and the pursuit of higher numbers for the shareholders yoy; endless growth!
[dead]
I knew npm was a train wreck when I first used it years ago and it pulled in literally hundreds of dependencies for a simple app. I avoid anything that uses it like the plague.
I can tell a lot about a dev by the fact that they single out npm/js for this supply chain issue.
Lots of language ecosystems have this problem, but it is especially prominent in JS and lies on a spectrum. For comparison, in the C/C++ ecosystem it is common for libraries to advertise that they have zero dependencies and are header-only, or to rely on one common major library like Boost.
What other language ecosystems have had this happen systematically? This isn't even the first time this month!
7 replies →
The JavaScript ecosystem has a major case of import-everything disease that acts as a catalyst for supply chain attacks. left-pad as one example of many.
Just more engineering-leaning than you. Actual engineers have to analyze their supply chains, so it makes sense they would be baffled by the NPM dependency trees that utterly normal projects grow into in the JavaScript ecosystem.
6 replies →
That they’ve coded in more than one language?
I think it’s just that a lot of old men don’t like how popular it has become with script kiddies.
"I knew you weren't a great engineer the moment you started pulling dependencies for a simple app"
You realize my point right? People are taught to not reinvent the wheel at work (mostly for good reasons) so that's what they do, me and you included.
You ain't gonna be bothered to write HTML and do manual DOM manipulation, the people that give you libraries to do so won't be bothered reimplementing parsers and file watchers, file watcher writers won't be bothered reimplementing file system utils, file system util developers won't be bothered reimplementing structured cloning or event loops, etc, etc.
I myself just the other day had the task of converting HTML to Markdown, because I don't remember whether it was the Jira or GitHub API that returns comments as HTML, and despite it being mostly a few hours of work that would get us 90% there, everybody was in favor of pulling in a dependency to do so (with its own dependencies), thus further exposing our application to those risks.
Pause, you could write an HTML to markdown library in half a day? Like, 4 hours? Or 12? Either way damn
12 replies →
So basically you live JavaScript free?
as much as i can yes.
I try to avoid JS, as it is a horrible language by design. That does include TS, but it at least is usable, if barely, because it is still tied to JS itself.
18 replies →
You can write javascript without using npm...
I mean, it's hard to avoid indirectly using things that use npm, e.g. websites or whatever. But it's pretty easy to never have to run npm on your local machine, yes.
It's probably not trivial to implement and there's already a bunch of problems that need solving (e.g., trusting keys etc.) but... I think that if we had some sort of lightweight code provenance (off the top of my head: commits are signed with known/trusted keys, releases are signed by known keys, installing signed packages requires verification), we could probably make it somewhat harder to introduce malicious changes.
Edit: It looks like there's already something similar using sigstore in npm https://docs.npmjs.com/generating-provenance-statements#abou.... My understanding is that its use is not widespread though and it's mostly used to verify the publisher.
I think that depends on...how are these malicious changes actually getting into these packages? It seems very mysterious to me. I wonder why npm isn't being very forthcoming about this?
Last week someone wrote a blog post saying "We dodged a bullet" because it was only a browser-based crypto wallet scrape
Guess we didn't dodge this one
We didn't really dodge a bullet. We put a bullet named 'node' in the cylinder of a revolver, spun it, pointed the gun at our head, and pulled the trigger. We just happened to be lucky enough that we got an empty chamber.
In the story about the Nx compromise a few weeks ago someone posted a neat script that uses bubblewrap on Linux to run tools like npm more safely by confining their filesystem access. https://news.ycombinator.com/item?id=45034496
I modified the script slightly based on some of the comments in the thread and my own usage patterns:
Put this in `~/.local/bin` and symlink it to `~/.local/bin/npm` and `~/.local/bin/yarn` (and make sure `~/.local/bin` is first in your `$PATH`). I've been using it to wrap npm and yarn successfully in a few projects. This will protect you against some attacks that use postinstall scripts to do nefarious things outside the project.
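A rough sketch of what such a wrapper can look like, assuming Linux with bubblewrap installed and the real npm living later in `$PATH`; this is illustrative only, not the exact script from that thread, and you may need extra writable binds (e.g. the npm cache) depending on your setup:

```sh
#!/usr/bin/env bash
# Illustrative sketch: run the real npm/yarn inside a bubblewrap sandbox that can
# only write to the current project directory. Adjust binds to taste.
set -euo pipefail
tool="$(basename "$0")"                          # invoked as "npm" or "yarn" via symlink
real="$(PATH="${PATH#*:}" command -v "$tool")"   # resolve the real binary further down $PATH
exec bwrap \
  --ro-bind / / \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --bind "${PWD}" "${PWD}" \
  --unshare-all \
  --share-net \
  --die-with-parent \
  "$real" "$@"
```

The main win is that a postinstall script inside the sandbox can no longer write outside the project directory; hiding readable secrets such as `~/.ssh` or `~/.npmrc` from it takes extra flags (e.g. `--tmpfs "$HOME/.ssh"`).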
Is exactly why I composed bubblewrap-based sandbox-venv for Python: https://github.com/kernc/sandbox-venv
Dangerous times we live in.
> --bind "${PWD}" "${PWD}"
Pardon my ignorance, but couldn't a malicious actor just redefine $PWD before calling an npm script?
The above script wraps npm. PWD gets evaluated before npm is called (so PWD is expanded in the "outside" environment).
Of course, if your malicious actor has access to your environment already, they can redefine PWD, but that's assuming you're already compromised. This bwrap script is to avoid that malicious actor running malicious install scripts in the first place.
However, I don't think it protects you against stuff like `npm install compromised-executable && node_modules/.bin/execute-compromised-executable` – then you'd have to bwrap that second call as well. Or just bwrap bash to get a limited shell.
2 replies →
Is there any way to install CLI tools from npmjs without being affected by a recent compromise?
Rust has `cargo install --locked`, which will use the pinned versions of dependencies from the lockfile, and these lockfiles are published for bin packages to crates.io.
But it seems npmjs doesn't allow publishing lockfiles, neither for libraries nor for CLI tools, so if you try to install let's say @google/gemini-cli, it will just pull the latest dependencies that fit the constraints in package.json. Is that true? Is it really this bad? If you try to install a CLI tool on a bad day when half of npmjs is compromised, you're out of luck?
How is that acceptable at all?
Lock files wouldn't work if they locked transitive dependencies for every library; the version solver would not have any work to actually do, and you'd end up with many, many versions of the same package rather than a few versions that satisfy all of the version range constraints.
Lots of good ideas since last week, the one I like most being that published packages, especially those that are high in download count, don't actually go live for a while after publishing, allowing security scanners to do their thing.
In the Rust ecosystem, you only publish lock files for binary crates. So yeah then you get churn like https://github.com/cargo-bins/cargo-binstall/releases/tag/v1... bumping transitive deps, but this churn/noise doesn't exist for library crates - because the lock file isn't published for them.
2 replies →
npm will use your lockfile if it’s present, otherwise yeah it’s pretty much whatever is tagged and latest at the time (and the version doesn’t even have to change). If npm respected every upstream lockfile, then it could never share a single version that satisfied all dependencies. The bigger issue here is that npm has such unrestricted and unsupervised access to the entire environment at all.
> If npm respected every upstream lockfile, then it could never share a single version that satisfied all dependencies.
I'm asking in the context of installing a single CLI tool into ~/bin or something. There's no requirement to satisfy all dependencies, because the only dependency I care about is that one CLI tool. All I want is an equivalent of what `cargo install --locked` does — use the top-level lockfile of the CLI tool itself.
3 replies →
We've seen many reports of supply chain attacks affecting NPM. Are these symptoms of operational complexity, which can affect any such service, or is there something fundamentally wrong with NPM?
It's actually relatively simple.
Adding dependencies comes with advantages and downsides. You need to strike a balance between them. External libraries can help implement things that you better don't implement yourself, so the answer is certainly not "no dependencies". But there are downsides and risks, and the risks grow with the number of dependencies.
In the world of NPM, people think those simple truths don't apply to them and the downsides and risks of dependencies can be ignored. Then you end up with thousands of transitive dependencies.
They're wrong and are learning it the hard way now.
You can't put this all on the users. The JS/node/npm projects have been mismanaged since the start.
node should have shipped "batteries included" after the left-pad incident. There was a boneheaded attachment to a small stdlib, which you could put down to youthful innocence, except that it's been almost 10 years.
The TC39 committee, which controls the design of the JS stdlib, and the node maintainers basically both act like the other doesn't exist.
NPM was never designed with security in mind. It's a dirty hack that somehow became the most popular package manager.
The dependency hell is a reflection of the massive egos of the people involved in the multiple organizations. Python doesn't have this problem because it's all centralized under one org with a single vision.
1 reply →
Apparently Maven has 61.9M indexed packages. As Java has a decent standard lib, mini libs like leftpad are not contributing to this count. NPM has 3.1M packages. Many are trivially simple. Those stats would suggest that NPM has disproportionately more issues than other services.
I would argue that is only one of the many issues with the JS/TS/NPM ecosystem. Many of the other problems have been normalized. The constant security issues are highly visible.
> Apparently Maven has 61.9M indexed packages.
Where did you see that number? Maven central says it has about 18 million [1] packages. Maybe with all versions of those 18 million packages there are about 62 million artifacts?
While the Java ecosystem is vastly larger, in Java (with Maven, Gradle, Bazel, etc.) it is not common to use really small libraries. So you end up with vastly fewer transitive dependencies in your projects.
[1] https://mvnrepository.com/repos/central
1 reply →
On Maven, I restrict packages to Spring and Apache. As opposed to NPM, where even big vendors can depend on hundreds of small ones.
1 reply →
There is a guy (ljharb) who is literally on TC39 - JavaScript specification committee - who is maintaining like 600 packages full of polyfills/dependencies/utilities.
It's just javascript being javascript.
There was a huge uproar about that guy specifically and deep dependency graphs in general a year ago. A lot has already changed for lots of the popular frameworks and libraries. Dependency graphs are already much slimmer. The cultural change is happening, but we can't expect it to happen all at once.
That wouldn't be a problem if there was proper package signing and the polyfill packages were hosted under a package namespace owned by the javascript specification committee.
Irrelevant here. You use eslint-plugin-import with its 60 dependencies; one dependency or 60 is irrelevant, because you only need one token: his. They're all his packages.
The problem with that guy is that the dependencies are useless to everyone except his ego.
It's just where the users and the juicy targets are.
NPM packages are used by huge Electron apps like Discord, Slack, VS Code, the holy grail would be to somehow slip something inside them.
It's both that and a culture of installing a myriad of constantly-updating, tiny libraries to do basic utility functions. (Not even libraries, they're more like individual pages in individual books).
In our line-of-business .NET app, we have a logger, a database, a unit tester, and a driver for some specialty hardware. We upgrade to the latest version of each external dependency about once per year (every major version) to avoid accruing tech debt. They're all pinned and locally hosted, nuget exists but we (like most .Net developers) don't use it to the extent that npm devs do. We read the changelogs - all four of them! - and manually update.
I understand that the NPM ecosystem works differently from a "batteries included" .Net environment for a desktop app, but it's not just about where the users are. Line of business code in .Net and Java apps process a lot of important data. Slipping a malicious package into pypi could expose all kinds of juicy, proprietary data, but again, it's less about the existence of a package manager and more about when and how you use it.
2 replies →
We don't see these attacks nearly as severe or frequent on Maven, which is a much older package management solution. Maven users would be far more attractive targets given corporates extensively run Java.
1 reply →
It is also, in my humble but informed opinion, where you will find the least security-conscious programs, just because of the breadth of its use and myriad of deployments.
It's the new pragmatic choice for web apps, and so everyone is using it, from battle-hardened teams to total noobs to people who just don't give a shit. It reminds me of Wordpress from 10 years ago, when it was the goto platform for cheap new websites.
Every NPM turd should be run with bubblewrap or a similar sandbox toolkit at least.
So do you expect other supply chain services that also supply juicy targets to be affected? I mean, we live in a bubble here in HN, so not seeing something in the front page doesn't mean it doesn't exist or it doesn't happen, but the feeling is that NPM is particularly more vulnerable than other services, correct me if I'm wrong.
Just spit-balling here, but it seems that the problem is with the pushing to NPM, and distribution from NPM, rather than the concept of NPM. If NPM required some form of cryptographically secure author signing, and didn't distribute un-signed packages, then there is at least a chain of responsibility that can be followed.
It's the entire blasé nature of JS development in general.
With Javascript, yes, but also with all programming-language package managers and software development culture in general. There's too huge of an attack surface, and virtually no attack mitigation. It's a free for all. These are solvable problems, though. Distros have been doing it the right way for decades, and we could do it even better than that. But being lazy is easier. Until people are forced to improve - or there's some financial incentive - they don't.
This has been brewing for a long time. Maven, CPAN before it.
Maybe some of these systems have better protection from counterfeiting, and probably they all should. But as the number of packages you use goes up, the surface area does too. As a Node developer the… permissiveness of the culture has always concerned me.
The trick with playing with fire is understanding how fire works, respecting it, and keeping the tricks small. The bigger you go, the more the danger.
NPM isn't perfect, but no, it's fundamentally self-inflicted.
Community is very happy to pick up helper libraries and by the time you get all the way up the tree in a react framework you have hundreds or even thousands of packages.
If you're sensible you can be fine, just like in any other ecosystem, but you're limited, because one wrong package and you've just ballooned your dependency tree by hundreds, which lowers the value of the ecosystem.
Node doesn’t have a standard library and until recently not even a test runner which certainly doesn’t help.
If you're sensible with Node or Deno* you'll be somewhat insulated from all this nonsense.
*Deno has linting, formatting, testing & a standard library which is a massive help (and a permission system so packages can't do whatever they want)
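To make the permission-system point concrete (hypothetical file and host names, using Deno's documented permission flags): nothing is allowed by default, so a compromised transitive dependency can't silently phone home or read files it wasn't granted.

```sh
# Deny-by-default; grant access per resource:
deno run --allow-net=api.example.com --allow-read=./data --allow-env=PORT server.ts
```

Any access outside those grants triggers a permission prompt or error at runtime instead of quietly succeeding.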
> is there something fundamentally wrong with NPM?
Its users don't check who the email is from
It's amazing how we attack normies for downloading random software, but we will load our projects with hundreds of dependencies we don't audit ourselves.
Warning: LLM-generated article, terribly difficult to follow and full of irrelevant details.
Related (7 days ago):
NPM debug and chalk packages compromised (1366 points, 754 comments): https://news.ycombinator.com/item?id=45169657
Related in that this is another, separate, attack on npm.
No direct relation to the specific attack on debug/chalk/error-ex/etc that happened 7 days ago.
The article states that this is the same attackers that got control of the "nx" packages on August 27th, which didn't really get a lot of traction on HN when it happened: https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=fa...
Seems to be a separate incident?
Separate? Yes. Unrelated? Hard to tell.
1 reply →
My main takeaway from all of these is to stop using tokens, and rely on mechanisms like OIDC to reduce the blast radius of a compromise.
How many tokens do you have lying around in your home directory in plain text, able to be read by anything on your computer running as your user?
> How many tokens do you have lying around in your home directory in plain text, able to be read by anything on your computer running as your user?
Zero? How many developers have plain-text tokens lying around on disk? Avoiding that has been hammered into me by every developer more senior than me since I got involved with professional software development.
You're sure you don't have something lying around in ~/.config ? Until recently the github cli would just save its refresh token as a plain text file. AWS CLI loves to have secrets sitting around in a file https://docs.aws.amazon.com/cli/latest/userguide/cli-configu...
1 reply →
> How many developers have plain-text tokens lying around on disk?
Most of them. Mainly on purpose (.env files), but many also accidentally (shell history with tokens in the commands).
1 reply →
Isn't this quite hard to achieve on local systems, where you don't have a CI vault automation to help?
4 replies →
How do you manage secrets for your projects?
10 replies →
A good habit, but encryption won't save you in all cases because anything you run has write access to .bashrc.
Frankly, our desktop OSes are not fit for purpose anymore. It's nuts that everything I run can instantly own my entire user account.
It's the old https://xkcd.com/1200/ . That's from 2013 and what little (Flatpak, etc.) has changed has only changed for end users - not developers.
For years everyone in the programming community has been pushing for convenience and features and code reuse, and it's got to the point where I think the ease of adding a third-party package from the language's package manager or GitHub needs to be seriously questioned by security-conscious devs. Perhaps we made the wrong things easy.
This is a product of programming languages continuing to ignore the lessons from capability security. The fact that packages in your programming language even have the ability to any of the things listed in these articles by default is an embarrassing, abject failure of our profession.
So, other packaging environments have a tendency to slow down the rate of change that enters the user's system. Partly through the labor of re-packaging other people's software, but also as a deliberate effort. For instance: Ubuntu or RedHat.
Is anyone doing this in a "security as a service" fashion for JavaScript packages? I imagine a kind of package escrow/repository that only serves known secure packages, and actively removes known vulnerable ones.
I've worked in companies that do this internally, e.g., managed pull-through caches implemented via tools like Artifactory, or home-grown "trusted supply chain" automation, i.e., policy enforcement during CI/CD prior to actually consuming a third-party dependency.
But what you describe is an interesting idea I hadn't encountered before! I assume such a thing would have lower adoption within a relatively fast-moving ecosystem like Node.js though.
The closest thing I can think of (and this isn't strictly what you described) is reliance on dependabot, snyk, CodeQL, etc which if anything probably contributes to change management fatigue that erodes careful review.
Exactly. Everyone is doing this, maybe well, maybe poorly. Consider Sonatype Nexus and its "repository firewall" product. Their business model _depends_ on everyone not cooperating, so there's likely a ton of folks that would love to pay less to get the same results.
> The closest thing I can think of (and this isn't strictly what you described) is reliance on dependabot, snyk, CodeQL, etc which if anything probably contributes to change management fatigue that erodes careful review.
It's not glamorous work, that's for sure. And yes, it would have to rely heavily on automated scanning to close the gap on the absolutely monstrous scale that npmjs.org operates at. Such a team would be the Internet's DevOps in this one specific way, with all the slog and grind that comes with that. But not all heroes wear capes.
> managed pull-through caches implemented via tools like Artifactory
This is why package malware creates news, but enterprises mirroring package registries do not get affected. Building a mirroring solution will be pricey though mainly due to high egress bandwidth cost from Cloud providers.
How does a pull-through cache prevent this issue? Wouldn’t it also just pull the infected version from the upstream registry?
1 reply →
Google has Assured Open Source for Python / Java https://cloud.google.com/security/products/assured-open-sour...
Some other vendors do AI scanning
I doubt anyone would want to touch js packages with manual review.
It would take labor, that's for sure. Manual review of everything JS is just too massive a landscape to cover. Automation is the way to go here, for sure.
I think the bare minimum is heavy use of auditjs (or Snyk, or anything else that works this way), and maybe a mandatory waiting period (2-4 weeks?) before allowing new packages in. That should help wave off the brunt of package churn and give auditjs enough time to catch up to new package vulnerabilities. The key is to not wait too long so folks can address CVE's in their software, but also not be 100% at the bleeding edge.
Languages/VMs should support capability-based permissions for libraries, no library should be able to open a file or do network requests without explicit granular permissions.
post-install seems like it shouldn't be necessary anyway, let alone need shell access. What are legitimate JS packages using this for?
From what I've seen, it's either spam, telemetry, or downloading prebuilt binaries. The first two are anti-user and should not exist, the last one isn't really necessary — swc, esbuild, and typescript-go simply split native versions into separate packages, and install just what your system needs.
Use pnpm and whitelist just what you need. It disables all scripts by default.
Does that even matter?
The malware could have been JS code injected into the module entry point itself. As soon as you execute something that imports the package (which you did install for a reason), the code can run.
I don't think that many people sandbox their development environments.
It absolutely matters. Many people install packages for front-end usage which would only be imported in the browser sandbox. Additionally, a package may be installed in a dev environment for inspection/testing before deciding whether to use it in production.
To me it's quite unexpected/scary that installing a package on my dev machine can execute arbitrary code before I ever have a chance to inspect the package to see whether I want to use it.
1 reply →
Most don't need it. There was a time when most postinstall scripts flooded your terminal with annoying messages to upgrade, donate, or say hi.
Modern node package managers such as yarn and pnpm allow you to prevent post installs entirely.
Today most of the time you need to make an exception for a package is when a module requires native compilation or download of a pre-built binary. This has become rare though.
I think these compromises show that install hooks should be severely restricted.
Something like, only packages with attestations/signed releases and OIDC-only workflow should allow these scripts.
The worm could propagate through the code itself, but I think it would be quite a bit less effective.
Yes, cybersecurity is absolutely a cost center. You can pay for it the easy way, the hard way, or the very hard way. Looks like we're fixing NPM the very hard way.
Anyone know if there is a public events feed/firehose for the npm ecosystem? Similar to the GitHub public events feed?
We, at ClickHouse, love big data and it would be super cool to download and analyse patterns in all this data & provide some tooling to help with combatting this widespread issue.
So glad I left JS for backend last year. It was a big effort switching to a new language and framework (still is) but it looks like so far the decision was worth it.
I'm still looking at Bun and all the effort they're doing with built-in APIs to reduce (and hopefully eliminate) third party deps. I would prefer using TS for the whole stack if possible but not at the expense of an insecure backend ecosystem.
Just curious, what did you switch to?
dotnet
Blog author company's runner detects anomalies in them, but we shouldn't need a product for this.
Detecting an outbound network connection during an npm install is quite cheap to implement in 2025. I think it comes down to priorities and incentives: if security were placed as the first priority, as it should be for any computing service and especially for supply-chain infrastructure like package management, this would be built in.
One thing that comes to mind that would make it a months-long debate is the potential breakage of many packages. In that case, as a first step, just print an eye-catching summary post-install, with a gradual push toward total restriction via something like a strict mode; we've done this before.
Which reminds me of another long-standing issue with node ecosystem tooling: information overload. It's easy to bombard devs with a thesis-length wall of output and then blame them for eventually getting fatigued and not reading it. It takes effort to summarize what matters most, with layered expansion into detail; show some.
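As a crude illustration of how cheap the detection itself is (a sketch, assuming a Linux machine with strace; a real tool would compare against an allowlist and report properly):

    # log every network-related syscall made by npm and its children during install
    strace -f -e trace=network -o npm-net.log npm ci
    # then see which addresses were contacted, and how often
    grep -o 'sin_addr=inet_addr("[^"]*")' npm-net.log | sort | uniq -c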
"Outbound network connection at npm install" is just one of many ways malware in NPM package can manifest itself.
E.g. malware might be executed when you test code which uses the library, or when you run a dev server, or on a deployed web site.
The entire stack is built around trusting the code and letting it do whatever it wants. That's the problem.
Trust is hard, it all comes down to trust no matter what you do. The more general idea is sandboxed build, it doesn't eliminate all problems but one class.
Jesus Christ. Another one? What the fuck?
This isn't a JavaScript problem. What, structurally, stops the same thing happening to PyPI? Or the Rust ecosystem? Or Lisp via QuickLisp? Or CPAN?
This whole mess was foreseeable. So what's to be done?
Look. Any serious project needs to start vendoring its dependencies. People should establish big, coarse grained meta-distributions like C++ Boost that come from a trustable authority and that get updated infrequently enough that you can keep up with release notes.
> This isn't a JavaScript problem. What, structurally, stops the same thing happening to PyPI? Or the Rust ecosystem? Or Lisp via QuickLisp? Or CPAN?
For one, NPM has a really sprawling ecosystem where it's normal to have many dependencies.
I remember that I once tried to get started with angular, and I did an "init" for an empty project and "compile", and suddenly had half a gigabyte of code lying in my directory.
This means that there is a high number of dependencies that are potential targets for a supply chain attack.
I just took a look at our biggest JS/Typescript project at work, it comes in at > 1k (recursive) NPM dependencies. Our biggest Python project has 78 recursive dependencies. They are of comparable size in terms of lines of code and total development time.
Why? Differences in culture, as well as python coming with more "batteries included", so there's less need for small dependencies.
> For one, NPM has a really sprawling ecosystem where it's normal to have many dependencies.
Agreed, but it's a difference of degree (literally --- graph in- and out-degree) not kind.
Rust was hit by a similar attempt: https://github.com/rust-lang/crates.io/discussions/11889
Nothing much came of it, I don't know.
> Or Lisp via QuickLisp
Common Lisp is not worth it - you are unlikely to hit any high-value production target, there are not many users, and they are tech-savvy. Good for us, the 5 remaining users. Also, Quicklisp is not rolling-release; it is a snapshot done once or twice a year.
They were new versions of the packages instead of modified existing ones so vendoring has the same effect as the usual practice of pinning npm deps and using npm ci, I think.
How many packages have been compromised over the past couple of weeks now? The velocity of these attacks is insane. Part of me believes state actors must be involved at this point.
In any case, does anyone have an exhaustive list of all recently compromised npm packages + versions across the recent attacks? We need to do an exhaustive scan after this news...
> Part of me believes state actors must be involved at this point.
It's less a technical hurdle than a moral one. It's probably a bunch of teenagers behind it, like it was with the Mirai botnet.
Just a note, guys: it did not start with tinycolor. I had first reported it here; I'm just not as popular haha
My posts way before the issue was created: https://news.ycombinator.com/item?id=45252940 https://www.linkedin.com/posts/daniel-pereira-b17a27160_i-ne...
I haven't dug into the specifics, but technical props and nostalgia for the "self-propagating" nature. Reminds me of the OG worm: https://en.wikipedia.org/wiki/Morris_worm
The real key takeaway here is that Microsoft could fix this if they wanted: they have near-infinite resources, the best people, and are more heavily invested in open source than anyone else in the business, yet they still refuse to even comment on the situation.
I'm not sure language package managers were a good idea at all. Dependencies were supposed to be painful. If the language needed some functionality built in, it was supposed to go into the standard library. I understand that for JS this isn't feasible.
It is not package managers. It is the poor NPM ecosystem: lots of crappy packages (like left-pad), auto-updates, lots of dependencies, post-install scripts, an insecure language.
These security problems happen much less often in other ecosystems. There is nothing even remotely as bad as NPM.
Nah, package managers are always the "civilization" moments of programming.
There was a very similar discussion on lobsters the other day. You might be interested in reading it.
In general, I agree with the idea that writing everything yourself results in a higher quantity of low quality software with security issues and bugs, as well as a waste of developers' time. That said, clearly supply chain attacks are a very real threat that needs to be addressed. I just don't think eliminating package managers is a good solution.
https://lobste.rs/s/zvdtdn
Ironically, I started seeing a message on GitHub saying 2FA will be auto-enforced shortly. Wonder if that is a sign of something similar coming for npm packaging?
Or wonder if GitHub is enforcing 2fa soon because of the NPM CVEs potential to harvest GitHub creds?
2FA is the first step in stopping the onslaught.
But it still doesn't stop infected developer machines from silently updating code and patiently waiting for the next release.
It would require the diligence of those developers to check every line of code that goes out with a release... which is a lot to ask of someone who fell for a phishing email.
Someone should eradicate the npm ecosystem and start from scratch. No sane package manager would allow running arbitrary scripts or downloading stuff from God knows where, like random GitHub repos.
npm is owned by a private company now, right? It also looks like they have already gone through enshittification, and they don't even seem to have publicly acknowledged this attack.
I guess it's still spreading? Those blogs seem to list different packages.
Perhaps set `minimumReleaseAge > 1440` in pnpm config until this is fixed.
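For reference (hedging a bit, since this setting is fairly new and I may have the file wrong), recent pnpm versions let you set it in pnpm-workspace.yaml, with the value in minutes:

    # pnpm-workspace.yaml - don't resolve versions published less than 24h ago
    minimumReleaseAge: 1440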
Code signing, 2FA, and reducing dependencies are all incomplete solutions. What we need is fine-grained sandboxing, down to the function and type level. You will always be vulnerable as long as you're relying on fallible humans (even yourself) to catch or prevent vulnerabilities.
Apparently they've tried to implement this in JavaScript but the language is generally too flexible to resist a malicious package running in the same process.
We need to be using different languages with runtimes that don't allow privileged operations by default.
That doesn't solve it either. If you need to grant hundreds of permissions, people will just hand-wave them all through; remember the UAC debacle in Windows Vista? I like Deno's approach way better. And you could also ask why any application can just read files in your home folder or make network requests to external hosts; OSes really are part of the equation here.
My problem is that, in the JS ecosystem, every single time you go through a CI/CD pipeline, you re-download everything. We should only download the first time, and only the versions that are known to work. When we manually update a version, then only that should be downloaded again.
I just checked one of our repos right now and it has 981 packages. It's not even realistic to vet the packages or to know which one is compromised. 99% of them are dependencies of dependencies. Where do we even get started?
Redownloading everything isn't a risk when the lock file records a hash of what was downloaded on the first install.
Isn't that what lockfiles are for? By default `npm i` downloads exactly the versions specified in your lockfile, and only resolves the latest versions matching the ranges specified in package.json if no lockfile exists. But CI/CD pipelines should definitely be using `npm ci` instead, which will only install packages from a lockfile and throws an error if it doesn't exist.
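For CI, something like this is all it takes to get "install exactly the lockfile or fail loudly" behaviour (a sketch; `--ignore-scripts` is an extra precaution on top, not required for the lockfile guarantee):

    # install step in the pipeline: exact versions + integrity hashes from the lockfile
    npm ci --ignore-scripts
    # errors out if package-lock.json is missing or out of sync with package.json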
That and pin that damn version!
3 replies →
Generate builder images and reuse them. It shaves minutes off each CI job with projects I'm working on and is non-optional because we're far from all major datacenters.
Or setup a caching proxy, whatever is easier for your org. I've had good experience with nexus previously, it's pretty heavy but very configurable, can introduce delays for new versions and check public vulnerability databases for you.
It's purely an efficiency problem though, nothing to do with security, which is covered by lock files.
Related:
Active NPM supply chain attack: Tinycolor and 40 Packages Compromised
https://news.ycombinator.com/item?id=45256210
(Additional context for Mr. Architect's Link Pasta: 33 comments, 1 day ago)
I was just reading an article in Foreign Affairs that was discussing a possible future with an increased separation of science and technological developments between China and The West. And it occurred to me, what would such a siloed landscape mean for OSS and basically the whole web infrastructure as it is today, shared and open for anyone in any country. I think this kind of malware becoming pervasive could be the failure state if this future becomes reality.
I always thought open source in a purely profit-driven society was a bit contradictory, but it's like Wikipedia: there is just something innate in people that makes them care for their craftsmanship and their community with zero profit incentive, despite the prevailing ideology telling us that this ought to be impossible and surely about to collapse any moment now. OSS will prevail despite Microsoft's disastrous and irresponsible stewardship of a smallish portion of it.
May I ask which article it was? The Once and Future China?
That's the one
Maybe stupid question here. And forgive my ignorance.
But do yarn or deno suffer from the same issues? That is, do they get their packages from npm repositories? I've never used these.
Yes.
NPM needs some kind of attestation mechanism. There needs to be an independent third party that has the fingerprint, and npm must verify it before a change is published. It could even be just DNS or a well-known URI that, if changed, triggers a lockdown. Then, even in the case of a successful compromise of an NPM account or source control, whether via phishing like the last one or token exfiltration like this one, the malicious version will remain unpublished.
At this time, should we just consider all of npm unsafe for installing new packages? Installing a single package can pull in hundreds of transitive dependencies.
Yes. Also, no need for "at this time".
What's stopping supply chain attacks like this from happening in other languages like Python, or even in source repos via compromised forge accounts like Github? Artifact/commit signing is optional, so while 2FA fortunately is becoming mandatory, if the maintainer never used signing then this could happen to PyPI just as well as NPM, no? Or is NPM uniquely vulnerable for some reason?
For a large subset of packages (like the browser ones), as a layman, it seems feasible to do static analysis for:
1) fetch calls
2) obfuscation (like sketchy lookup tables and hex string construction)
Like for (1) the hostname should be statically resolvable and immutable. So you can list the hostnames it fetches from as well.
Is this feasible or am I underestimating the difficulty? Javascript seems to have no shortage of static analysis tools.
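A toy version of (1), assuming acorn/acorn-walk as the parser (a real tool would also have to handle require, dynamic import(), member-expression aliasing, and the various eval-style escape hatches):

    // scan.mjs - flag direct fetch() calls and report string-literal targets
    import { readFileSync } from "node:fs";
    import { parse } from "acorn";
    import { simple } from "acorn-walk";

    const code = readFileSync(process.argv[2], "utf8");
    const ast = parse(code, { ecmaVersion: "latest", sourceType: "module" });

    simple(ast, {
      CallExpression(node) {
        if (node.callee.type === "Identifier" && node.callee.name === "fetch") {
          const arg = node.arguments[0];
          const target =
            arg && arg.type === "Literal" ? arg.value : "<not statically resolvable>";
          console.log(`fetch at offset ${node.start}: ${target}`);
        }
      },
    });

Run it as `node scan.mjs path/to/file.js` after `npm i acorn acorn-walk`.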
There are many ways to "eval" in javascript, and static analysis can only work if that's also statically disallowed.
Unfortunately, eval is still used in a lot of code, so disabling it isn't trivially viable, and with eval present, detecting fetch calls and such statically becomes the halting problem.
This seems like a great opportunity for someone to push a smaller but fully audited subset of the npm repos.
Corporations would love it.
Unless the npm infrastructure is thoroughly curated and moderated, it's always going to remain a high-risk threat.
Wow, it got everything: AWS keys, GCP keys, GitHub tokens. That's a lot of cryptocoin-mining instances that are going to be spun up, and a lot of unexpected bills people are going to be getting...
They really shouldn't have been stored unencrypted on people's machines... Ouch.
The number of packages is now up to 180 (or more, depending on which source you're looking at)
For those of you without a security dept, hopefully this is of some help: https://github.com/Cobenian/shai-hulud-detect
more updates soon and PRs welcome.
Soon we'll see services like havemysecretsbeenpwned.com to check against your secrets xD, given the malware seeks out local creds.
In my experience, 80% of companies do not care that their secrets will be (or are being) exposed.
There is this shallow belief that production will never be hacked.
Each one of these posts makes me feel better about having no dependencies on my current project, other than Gulp, which I could replace if I had to.
But also I miss having things like spare time, and sleep, so perhaps the tradeoff wasn't worth it
It's high time we took this seriously and required signing and 2FA on all publishes to NPM, and NPM needs to start doing security scanning and building tooling for this that it can charge organisations for.
Are Python packaging systems like pip exposed to the same risks?
Is anybody looking at this?
As much as I prefer Python over JavaScript, Python is vulnerable to this sort of attack. All it would take is a compromised update publishing only a source package, and hooking into any of setuptools's build or install steps. Pip's build isolation is only intended for reproducible builds. It's not intended to protect against malicious code.
PyPI's attestations do nothing to prevent this either. A package built from a compromised repository will be happily attested with malicious code. To my knowledge wheels are not required.
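To make the "hooking into install steps" point concrete, here's a minimal (benign) sketch of the mechanism; anything in the overridden command runs when pip installs the sdist (demo-pkg is just a placeholder name):

    # setup.py - cmdclass overrides run arbitrary code at install time
    from setuptools import setup
    from setuptools.command.install import install

    class PostInstall(install):
        def run(self):
            super().run()
            # a malicious update would exfiltrate tokens here instead of printing
            print("hello from install time")

    setup(name="demo-pkg", version="0.1", cmdclass={"install": PostInstall})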
Not to the same extent as NPM. Python has a good standard library, and library authors are not deathly afraid of code duplication the way JS devs are, hence no micro-libraries like left-pad, is-even, etc.
Also, there's more of a habit of releasing to a pre-release channel for some time first.
I honestly think a forced period in pre-release (with some emergency break-glass process where community leaders manually review critical hotfixes) could mitigate 99% of the issues here. Linux distro packages have been around forever and have fewer incidents mainly because of the long dev->release channel cooking time.
1 reply →
The weird dig at JS as a community is wholly unnecessary. Python as an ecosystem is just as vulnerable to this crap - and they’ve had their own issues with it.
You can reference that and leave the color commentary at the door.
1 reply →
Software supply chain attacks are well known and they are a massive hole in the entirety of software infrastructure. As usual with security, no one really cares that much.
As a developer, is there a way on mac to limit npm file access to the specific project? So that if you install a compromised package it cannot access any data outside of your project directory?
Wrote a small utility shell script that uses docker behind the scenes to prevent access to your host machine while still allowing the full npm install-and-run workflow.
https://github.com/freakynit/simple-npm-sandbox
Disclaimer: I am not a Docker expert. Please review the script (sandbox.js) and raise any potential issues or suggestions.
Thanks..
You can run nodejs through `sandbox-exec` which is part of macos.
I've never tried any of them but there's also a few wrappers specifically to do that, such as: https://github.com/berstend/node-safe
Otherwise you're down to docker or virtualisation or creating one system user per project...
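For the docker route, even something this simple keeps the blast radius to the project directory (a sketch; the env var is the standard way npm picks up config from the environment):

    # throwaway container that can only see the current project
    docker run --rm -it \
      -v "$PWD":/app -w /app \
      -e npm_config_ignore_scripts=true \
      node:22 npm ci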
Frankly, I am refusing to use npm outside of docker anymore.
Why did the socket.dev story from last night get flagged off the front page?
https://news.ycombinator.com/item?id=45256210
What indicates to you that it has been flagged?
I'm surprised this is happening now, and not 10 years ago.
We're seeing it now...
NPM gets a lot of traffic; there might be other package managers out there, in different languages, that have been infected in the past and simply don't get the same number of eyeballs.
Is there a link to a script to check if your project is affected?
Or is "yarn audit" enough?
(Of course we would not pipe the link to a shell; we would read it beforehand :D)
At least they're not re-inventing the wheel though!!
npm considered harmful
Does anyone know when @ctrl/tinycolor 4.1.1 was released exactly? Trying to figure out the infection timeline relative to my tools.
Never mind, got it:
This blog post and the others are from "security SaaS" companies that also try to make money off how bad npm package security is.
Why can't npm maintainers just implement something similar?
Maybe at least have a default setting (or an option) so that packages newer than X days are never automatically installed unless forced? That would at least give people time to review and notice if a package has been compromised.
Also, there really needs to be a standard library or at least a central community approved library of safe packages for all standard stuff.
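For what it's worth, npm already has a rough version of the delay idea above, if I remember the flag correctly: the `before` config only resolves versions published on or before a given date.

    # only consider versions published on or before this date
    npm install --before=2025-09-01
    # or persist it in .npmrc:
    # before=2025-09-01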
Would strict containerization help here? (rootless, read-only partial fs access, only the necessary env variables passed, etc)
I wouldn't mind a simple touch id (or password) requirement every time I run `npm publish` to help prevent such an attack.
This seems like something that can be solved with reproducible builds and ensuring you only deploy from a CI system that verifies along the way.
In fact this blog post appears to be advertising for a system that secures build pipelines.
Google has written up some about their internal approach here: https://cloud.google.com/docs/security/binary-authorization-...
With repos and workflows being infected, wouldn't a CI-only deploy fail to help as well?
The malware is modifying files and adding github workflows. If your builds are reproducible and run from committed code then the only way to add the post install script is if the maintainer reviews and accepts the commit that adds it. Similarly with the github workflow branch.
And if your CI is building and releasing in a sandboxed hermetic environment, then the sandboxes that build and release don't need credentials like AWS_ACCESS_KEY because they can't depend on data from the network. You need credentials for deploying and signing, but they don't need to be present during build time.
2 replies →
Is there a theoretical framework that can prevent this from happening? Proof-carrying code?
Object-capability model / capability-based security.
Do not let code have access to things it's not supposed to access.
It's actually that simple. If you implemented a function which formats a string, it should not have access to `readFile`, for example.
Retrofitting it onto JS isn't possible, though, as the language is way too dynamic: self-modifying code, reflection, etc. mean there's no isolation between modules.
In a language that is less dynamic, it might be as easy as making a whitelist for imports.
People have tried this, but in practice it's quite hard to do, because then you have to start treating individual functions as security boundaries: if you can't readFile, just find a function which does it for you.
The situation gets better in monadic environments (you can't readFile without the IO monad, and you can't call anything which would read it).
2 replies →
You can protect yourself using existing tools, but it's not trivial and requires serious custom work. Effectively you want minimal permissions and loud failures.
This is something I'm trying to polish for my system now, but the idea is: yarn (and bundler and others) needs to talk only to the repositories. That means yarn install is only allowed outbound connections to localhost, where a proxy for packages runs. It can only write to tmp, its caches, and the current project's node_modules. It cannot read home files beyond specified ones (like .yarnrc). The alias for yarn strips the cloud credentials. All tokens used for installation are read-only. Then you have to do the same for the projects themselves.
On Linux, SELinux can do this. On Mac, you have to fight a long battle with sandbox-exec, but it's kinda maybe working. (If it gained "allow exec with specified profile", it would be so much better.)
But you may have guessed from the description so far: it's all very environment-dependent, time-sink-y, and often annoying. It will explode loudly on issues, though - try to touch ~/.aws/credentials, for example, and yarn will get killed and reported - which is exactly what we want.
But internally? The whole environment would have to be redone from scratch. Right now package installation will run any code it wants. It will compile extensions with gyp, which is another avenue for custom code to run. The whole system relies on arbitrary code execution and hopes it's secure. (It never will be.) Capabilities are a fun idea, but they would have to be seriously improved and scoped to work here.
Why yarn instead of pnpm?
1 reply →
Something similar to Deno's permission system, but operating at a package level instead of a process level.
When declaring dependencies, you'd also declare the permissions of those dependencies. So a package like `tinycolor` would never need network or disk access.
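For reference, this is roughly the process-level version that exists in Deno today; the suggestion is essentially pushing these grants down to individual packages, which no package manager does yet (the host and paths below are made up for illustration):

    # Deno: the process gets only what you explicitly grant it
    deno run --allow-read=. --allow-net=api.example.com main.ts
    # the idea: the same grants, declared per dependency instead of per process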
Does Deno's sandboxing not extend to build time?
There are, but they have huge performance or usability penalties.
Stuff like intents: "this is a math library, it is not allowed to access the network or filesystem".
At a higher level, you have app sandboxing, like on phones or Apple/Windows store. Sandboxed desktop apps are quite hated by developers - my app should be allowed to do whatever the fuck it wants.
Do they actually have huge performance penalties in Javascript?
I would have thought it wouldn't be too hard to design a capability system in JS. I bet someone has done it already.
Of course, it's not going to be compatible with any existing JS libraries. That's the problem.
You can do that by screening module imports with zero runtime penalty.
Signatures could probably alleviate most of these issues, as each publish would require the author to actually sign the artifact; set up properly with hardware keys, this sort of malware couldn't spread. The NPM CI tokens that don't require 2FA kind of make it less useful, though.
Clojars (run by volunteers AFAIK) has been doing signatures since forever; not sure why it's so difficult for Microsoft to follow their own yearly proclamation that "security is our top concern".
I would like to see more usage of NPM/GitHub Actions provenance statements (https://www.npmjs.com/package/sigstore#provenance) throughout the ecosystem.
> The NPM CI tokens that don't require 2fa kind of makes it less useful though
Use OIDC to publish packages instead of having tokens around that can be stolen or leaked https://docs.npmjs.com/trusted-publishers
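A minimal sketch of what that looks like in a GitHub Actions workflow, assuming the package is set up for trusted publishing on npmjs.com (the id-token permission is what enables the OIDC exchange):

    # .github/workflows/publish.yml
    name: publish
    on:
      push:
        tags: ["v*"]
    permissions:
      contents: read
      id-token: write   # required for OIDC / provenance
    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 22
              registry-url: https://registry.npmjs.org
          - run: npm ci
          - run: npm publish --provenance --access public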
Yeah, LavaMoat (https://github.com/LavaMoat/LavaMoat). Dependencies are pinned with capabilities, and scripts are disabled by default.
Manual verification of releases and chain-of-trust systems help a lot. See for example https://lucumr.pocoo.org/2019/7/29/dependency-scaling/
I think I’m gonna start wrapping most of my build commands (which are inside a makefile) with bwrap. Allow network access only for fetching dependencies. Limit disk access (especially rw access) throughout. Etc.
If anyone is interested, I've added this BWRAP_BUILD variable to a makefile in my project that builds a Go and SvelteKit project. I then preface individual commands that I want sandboxed within a make target (e.g. mybin below).
Notes: most of the lines after --setenv GOPATH... are specific to my project and tooling. Some of the lines prior are specifically to accommodate my tooling, but I think that stuff should be reasonably general. Lmk if anyone has any suggestions.
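A rough sketch of the shape such a wrapper can take, as a small shell script rather than a make variable (not necessarily the exact same file; the bubblewrap flags are real, the bind choices are assumptions you'd tune per project):

    #!/bin/sh
    # bwrap-build.sh - run a build command in a bubblewrap sandbox:
    # read-only OS dirs, writable project dir, nothing from $HOME
    exec bwrap \
      --ro-bind /usr /usr \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --ro-bind /etc/resolv.conf /etc/resolv.conf \
      --bind "$PWD" "$PWD" --chdir "$PWD" \
      --dev /dev --proc /proc --tmpfs /tmp \
      --unshare-all --share-net \
      --die-with-parent \
      --setenv GOPATH "$PWD/.gopath" \
      "$@"

Usage would be something like `./bwrap-build.sh go build ./...` or `./bwrap-build.sh npm ci`.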
Is using any kind of NPM stuff a no-go? Who reads the code and verifies it is secure?
Other than the maintainer (which of course isn't guaranteed), no one; it's incumbent on those deploying a lib into a project to review the code themselves.
Why can't we hash libraries and have trusted parties publish security-checked hashes?
Isn't this a good use case for LLMs? Auditing all of the dependencies at compile time?
Please no, see
> Using CVE reports as a weapon
https://www.youtube.com/watch?v=GDdlRiThDeg
Oh, you took it further and let the LLM take the wheel. I was just referring to the LLM raising a red flag during compilation, so the worst-case scenario is just a false positive.
I had just seen some guy on TikTok pushing `mcp-knowledge-graph` the other day
You guys win, JS is actually fantastic and this headline is great.
New Npm vuln? Other shocking news: today is Tuesday
> It deliberately skips Windows systems
Reminds me of when I went to a tech conference with a Windows laptop and counted exactly two like me among the hundreds of attendees. I was embarrassed then but I'd be laughing now :D
..for now. Safer to assume there was a todo in the code and not some anti-Linux agenda.
Time for a Butlerian Jihad.
Be accountable for what you deploy if you're paid, unless there's an explicit deal saying otherwise. Like in real jobs. Next...
None of these were paid
Need to stop using javascript on desktop ASAP. Also Rust might be a bit dangerous now?
NPM belongs to Microsoft. What kind of security do you expect?
You're saying this as if big corps, banks, etc weren't using dotnet.
They should not. Microsoft lost the master key to the Azure cloud in 2023. Just now they have this issue: https://www.wired.com/story/microsoft-entra-id-vulnerability...
MS stands for minimal security.
Shai-hulud huh?
Maybe related to the Russian hacker group Sandworm?
https://en.wikipedia.org/wiki/Sandworm_(hacker_group)
npm should be banned and illegal to work with.
The same could be said of quite a few equivalents in other programming languages.
I have been telling people for ages: JavaScript is a pack of cards. We have progressed as a society and have so many alternatives for everything, and yet we still haven't done anything about JavaScript being forced onto us by browsers. If it weren't for web browsers, JS would have become irrelevant very fast because of how broken it is, both as a language and as an ecosystem.
On the contrary: almost a decade into Elixir, most of the time I don't need (and don't like) using external dependencies. I can just write something myself in a matter of an hour or so, because it's so easy to do it myself. And nothing I've written to date has required an audit or a rewrite every 6 months, or sometimes even for years.
We all seem to hate the concept of Nazis and yet somehow we have done nothing about the Nazi-est language of them all which literally has no other alternatives to run on web browsers?
[dead]
New day, new npm malware. Sigh..
> New day, new npm malware. Sigh..
This. But the problem seems to go way deeper than npm or whatever package manager is used. I mean, why is anyone consuming a package like colors or tinycolor? Do projects really need to drag in a random dependency to handle these use cases?
So rather than focusing on how Microsoft/npm et al can prevent similar situations in the future, you chose to think about what relevance/importance each individual package has?
There will always be packages that for some people are "but why?" but for others are "thank god I don't have to deal with that myself". Sure, colors and whatnot are tiny packages we probably could do without, but what are you really suggesting here? Someone sits and reviews every published package and rejects it if the package doesn't fit your ideal?
2 replies →
Why are people using React to write simple ecommerces?
Why are React devs pulling object utils from lodash instead of reimplementing them?
4 replies →
It used to be a new front-end framework every day
[dead]
Another day, another npm compromise
Time to add developer ID's verification /s
[flagged]
Bless the maker and his water.
My comment yesterday, which received one downvote and which I will repeat if/until they’re gone: HTTP and JS have to go. There are ways to replace them.
One downvote is not enough.
One upvote is not enough. We need enough upvotes to fix the problem. You can’t shape a big pile of shit into success. HTTP and JS will never serve as a proper application framework.
10 replies →
HTTP?
We have good protocols for sharing programs. HTTP was designed to share stylized documents, which it's OK at. The browser probably should have stuck to rendering and left the p2p file sharing to a better protocol. It absolutely is not fit for the problem domain it's been shoehorned into, nor does it need to serve that role.
2 replies →