NPM debug and chalk packages compromised

3 months ago (aikido.dev)

https://github.com/advisories/GHSA-8mgj-vmr8-frr6

Hi, yep I got pwned. Sorry everyone, very embarrassing.

More info:

- https://github.com/chalk/chalk/issues/656

- https://github.com/debug-js/debug/issues/1005#issuecomment-3...

Affected packages (at least the ones I know of):

- ansi-styles@6.2.2

- debug@4.4.2 (appears to have been yanked as of 8 Sep 18:09 CEST)

- chalk@5.6.1

- supports-color@10.2.1

- strip-ansi@7.1.1

- ansi-regex@6.2.1

- wrap-ansi@9.0.1

- color-convert@3.1.1

- color-name@2.0.1

- is-arrayish@0.3.3

- slice-ansi@7.1.1

- color@5.0.1

- color-string@2.1.1

- simple-swizzle@0.2.3

- supports-hyperlinks@4.1.1

- has-ansi@6.0.1

- chalk-template@1.1.1

- backslash@0.2.1

It looks and feels a bit like a targeted attack.

Will try to keep this comment updated as long as I can before the edit expires.

---

Chalk has been published over. The others remain compromised (8 Sep 17:50 CEST).

NPM has yet to get back to me. My NPM account is entirely unreachable; forgot password system does not work. I have no recourse right now but to wait.

Email came from support at npmjs dot help.

Looked legitimate at first glance. Not making excuses, just had a long week and a panicky morning and was just trying to knock something off my list of to-dos. Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was mobile).

Just NPM is affected. Updates to be posted to the `/debug-js` link above.

Again, I'm so sorry.

  • Just want to agree with everyone who is thanking you for owning up (and so quickly). Got phished once while drunk in college (a long time ago), could have been anyone. NPM being slowish to get back to you is a bit surprising, though. Seems like that would only make attacks more lucrative.

    • Can happen to anyone… who doesn’t use password manager autofill and unphishable 2FA like passkeys.

      Most people who get phished aren’t using password managers, or they would notice that the autofill doesn’t work because the domain is wrong.

      Additionally, TOTP 2FA (numeric codes) is phishable; stop using it when U2F/WebAuthn/passkeys are available.

      I have never been phished because I follow best practices. Most people don’t.

      24 replies →

  • Hey, no problem, man. You do a lot for the community, and it's not all your fault. We learn from our mistakes. I was thinking of having a public fake profile to avoid this type of attack, but I'm not sure how it would work with git's tracking capabilities. Probably keep the real one only between you and NPM, and have some fake ones open to the public. Not sure, just a rough idea. Thanks for taking responsibility and working on a fix ASAP. God bless you.

    • Unfortunately wouldn't have helped. They skimmed my npm-only address directly from the public endpoint.

    • Wow, that's actually kinda genius, not gonna lie. Honestly, I would love to see 2FA or some other way to prevent pwning. Maybe having sign-in with Google, with all of its flaws, still might make sense given that it effectively acts as 2FA.

      But google comes with its own privacy nightmares.

  • Tbh, it's not your fault per se; everybody can fall for phishing emails. The issue, IMO, lies with npmjs, which publishes to everyone all at the same time. A delayed publish that allows parties like Aikido and co to scan for suspicious package uploads first (e.g. big changes in patch releases, obfuscated code, code that intercepts HTTP calls, etc), plus a direct flagging system at NPM and/or GitHub, would already be an improvement.

  • Thanks for sounding the alarm. I've sent an abuse email to porkbun to hopefully get the domain taken down.

    • Thank you, I appreciate it! I did so as well and even called their support line to have them escalate it. Hopefully they'll treat this as an urgent thing; I'd imagine I'm far from the only one getting these.

      1 reply →

  • Yo, someone at npm needs to unpublish simple-swizzle@0.2.3 IMMEDIATELY. It’s still actively compromised.

    • It's been almost two hours without a single email back from npm. I am sitting here struggling to figure out what to do to fix any of this. The packages that have Sindre as a co-publisher have been published over but even he isn't able to yank the malicious versions AFAIU.

      If there's any ideas on what I should be doing, I'm all ears.

      EDIT: I've heard back, they said they're aware and are on it, but no further details.

      4 replies →

  • Thank you for your service.

    Please take care and see this as things that happen and not your own personal failure.

  • Hey, you're doing an exemplary response, transparent and fast, in what must be a very stressful situation!

    I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:

    TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.

    I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.

    What really helps against phishing :

    1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.

    2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.

    That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.

    Good luck and well done again on the response!

    • > NEVER EVER login from an email link. EVER

      Logging in via one-off email links (instead of username + password) is increasingly common, which means it's sometimes the only option.

      8 replies →

    • Or you know, get a password manager like the rest of us. If your password manager doesn't show the usual autofill, since the domain is different than it should, take a step back and validate everything before moving on.

      Have the TOTP in the same/another password manager (after considering the tradeoffs) and that can also not be entered unless the domain is right :)

      22 replies →

  • Absolutely best response here.

    Folks from multi-billion dollar companies with multimillion dollar packages should learn a few things from this response.

  • Thank you for the swift and candid response, this has to suck. :/

    > The author appears to have deleted most of the compromised package before losing access to his account. At the time of writing, the package simple-swizzle is still compromised.

    Is this quote from TFA incorrect, since npm hasn’t yanked anything yet?

    • Quote is probably added recently. Not entirely correct as I have not regained access; nothing happening to the packages is of my own doing.

      npm does appear to have yanked a few, slowly, but I still don't have any insight as to what they're doing exactly.

  • The fact that NPM's entire ecosystem relies on this not happening regularly is very scary.

    I’m extremely security conscious and that phishing email could have easily gotten me. All it takes is one slip-up. Tired, stressed, distracted. Boom, compromised.

  • I hate that kind of email when sent out legitimately. Google does this crap all the time pretty much conditioning their customers to click those links. And if you're really lucky it's from some subdomain they never bothered advertising as legit.

    Great of you to own up to it.

    • Atlassian and MS are terrible for making email notifications that are really hard to distinguish from phishing emails: using hard-to-identify, undocumented random domains in long redirect chains, obfuscating links, etc.

    • I’ve started ignoring these types of emails and wait to do any sort of credentials reset until I get an alert when I log in (or try to) for just this reason.

  • I am not a very sophisticated npm user on macOS, but I installed a bunch of packages for Claude Code development. How do we check if a computer has a problem?

    Do we just run:

    npm list -g #for global installs

    npm list #for local installs

    And check if any packages appear that are on the above list?

    Thanks!

    • How I do it is: run npm list --all, then check the complete dependency tree to find out whether I'm using a vulnerable package anywhere.
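As a concrete sketch of that check (using one version from the affected list above as an example):

```shell
# Print the full resolved dependency tree, then search it for one of
# the known-bad versions (debug@4.4.2 as an example).
npm ls --all | grep "debug@4.4.2"

# No output (grep exit status 1) means that version is not in the tree;
# repeat for each affected package/version pair.
```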

  • > Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was mobile).

    Does anyone know how this attack works? Is it a CSRF against npmjs.com?

    • That was the low-tech part of their attack, and was my fault - both for clicking on it and for my phrasing.

      It wasn't a single-click attack, sorry for the confusion. I logged into their fake site with a TOTP code.

      2 replies →

    • Fake site.

      You login with your credentials, the attacker logins to the real site.

      You get an SMS with a one time code from the real site and input it to the fake site.

      The attacker takes the code and finishes the login to the real site.

  • Hey, new dev here. Sorry if this is common knowledge and I am asking a stupid question. How does you getting phished affect these NPM packages? Aren't they handled by NPM or their developers?

    • The guy is actually the maintainer of those packages, so whoever got his credentials was able to publish releases of them. NPM itself does not build any packages; it's just a place where people publish stuff.

    • OP is the developer & maintainer of the affected packages, so the attacker was able to use their phished credentials to upload compromised versions to NPM.

      1 reply →

  • Thanks for leaving a transparent response covering what happened, how you responded, and what you're doing next, and for concisely taking accountability. Great work!

  • I'm sorry that you're having to go through this. Good luck sorting out your account access.

    I actually got hit by something that sounds very similar back in July. I was saved by my DNS settings where "npNjs dot com" wound up on a blocklist. I might be paranoid, but it felt targeted and was of a higher level of believability than I'd seen before.

    I also more recently received another email asking for an academic interview about "understanding why popular packages wouldn't have been published in a while" that felt like elicitation or an attempt to get publishing access.

    Sadly both of the original emails are now deleted so I don't have the exact details anymore, but stay safe out there everyone.

  • Thanks for your response. But this does call for preventing a single point of failure for security.

  • maybe you should work with feross to make a website/API that simply gives you a "true/false" on "can I safely update my dependencies right now": an out-of-band way to flag the current version, or all versions, of compromised packages.

  • mistakes happen. owning them doesn't always happen, so well done.

    phishing is too easy. so easy that I don't think the completely unchecked growth of ecosystems like NPM can continue. metastasis is not healthy. there are too many maintainers writing too many packages that too many others rely on.

  • Sorry to be dumb, but can you expand a bit on "2FA reset email..." so the rest of us know what not to do?

    • Ignore anything coming from npm you didn't expect. Don't click links, go to the website directly and address it there. That's what I should have done, and didn't because I was in a rush.

      Don't do security things when you're not fully awake, too. Lesson learned.

      The email was a "2FA update" email telling me it's been 12 months since I updated 2FA. That should have been a red flag but I've seen similarly dumb things coming from well-intentioned sites before. Since npm has historically been in contact about new security enhancements, this didn't smell particularly unbelievable to my nose.

      The email went to my npm-specific inbox, which is another way I verify these messages. That address can be queried publicly, but I don't generally count on spammers finding it; they tend to scrape git addresses etc.

      The domain name was `npmjs dot help` which obviously should have caught my eye, and would have if I was a bit more awake.

      The actual in-email link matched what I'd expect on npm's actual site, too.

      I'm still trying to work out exactly how they got access. I didn't believe they had technically gotten a real 2FA code from the actual site. EDIT: Yeah they did, nevermind. It was a TOTP proxy attack, or whatever you'd call it.

      Will post a post-mortem when everything is said and done.

      37 replies →

    • > so the rest of us know what not to do?

      Can't really tell you what not to do, but if you're not already using a password manager to help you avoid phishing scams, I really recommend starting.

      In the case of this attack, if you had a password manager and ended up on a domain that looks like the real one, but isn't, you'd notice something is amiss when your password manager cannot find any existing passwords for the current website, and then you'd take a really close look at the domain to confirm before moving forward.

      9 replies →

  • man. anyone and everyone can get phished in a targeted attack. good luck on the cleanup and thanks for being forward about it.

    want to stress that it can happen to anyone. no one has perfect opsec or tradecraft as a 1-man show. it's simply not possible. only luck gets one through, and that often enough runs out.

  • Not your fault. Thanks for posting and being proactive about fixing the problem. It could happen to anyone.

    And because it could happen to anyone, we should be doing a better job of using AI models for defense. If ordinary people reading a link target URL can see it as suspicious, a model probably can too. We should be plumbing all our emails through privacy-preserving models to detect things like this. The old family of vulnerability scanners isn't working.

One of the most insidious parts of this malware's payload, which isn't getting enough attention, is how it chooses the replacement wallet address. It doesn't just pick one at random from its list.

It actually calculates the Levenshtein distance between the legitimate address and every address in its own list. It then selects the attacker's address that is visually most similar to the original one.

This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit of only checking the first and last few characters of an address before confirming a transaction.
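As an illustration of the technique described above (a from-scratch sketch, not the actual deobfuscated payload; function names are my own):

```javascript
// Sketch of the address-swapping logic: classic dynamic-programming
// Levenshtein distance, then pick the attacker address closest to the
// victim's legitimate one.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Choose the lookalike most likely to survive a "check the first and
// last few characters" glance.
function closestLookalike(legit, attackerAddresses) {
  let best = attackerAddresses[0];
  let bestDist = Infinity;
  for (const addr of attackerAddresses) {
    const d = levenshtein(legit, addr);
    if (d < bestDist) { bestDist = d; best = addr; }
  }
  return best;
}
```

Defensively, the same logic explains why eyeballing an address is weak: the attacker optimizes for exactly the characters you check.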

We did a full deobfuscation of the payload and analyzed this specific function. Wrote up the details here for anyone interested: https://jdstaerk.substack.com/p/we-just-found-malicious-code...

Stay safe!

  • I'm a little confused on one of the excerpts from your article.

    > Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3

    As far as I've always understood, the lockfile always specifies one single, locked version for each dependency, and even provides the URL to the tarball of that version. You can define "x version or newer" in the package.json file, but if it updates to a new patch version it's updating the lockfile with it. The npm docs suggest this is the case as well: https://arc.net/l/quote/cdigautx

    And with that, packages usually shouldn't be getting updated in your CI pipeline.

    Am I mistaken on how npm(/yarn/pnpm) lockfiles work?

    • Not the parent, but by default `npm install` / `yarn install` will ignore the lock file if it can't satisfy package.json, and update it; if you want the lock file to be strictly respected you must use `npm ci` / `yarn install --frozen-lockfile`.

      In my experience, it's common for CI pipelines to be misconfigured in this way, and for Node developers to misunderstand what the lock file is for.

      16 replies →

    • As others have noted, npm install can/will change your lockfile as it installs, and one caveat of the clean-install command they provide is that it is SLOW, since it deletes the entire node_modules directory. Lots of people have complained, but nothing has been done: https://github.com/npm/cli/issues/564

      The npm team eventually seemed to settle on requiring someone to bring an RFC for this improvement, and the RFC someone did create has, I think, sat neglected in a corner ever since.

      1 reply →

    • You’re right and the excerpt you quoted was poorly worded and confusing. A lockfile is designed to do exactly what you said.

      The package.json specified the range ^1.3.2. If a newer version exists online that still satisfies the range in package.json (like 1.3.3 for ^1.3.2), npm install will often fetch that newer version and update your package-lock.json file automatically.

      That’s how I understand it / that’s my current knowledge. Maybe there is someone here who can confirm/deny that. That would be great!

      1 reply →
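The `npm install` vs `npm ci` distinction discussed in this subthread, as a quick cheat sheet (standard npm/yarn commands):

```shell
# May resolve newer versions that still satisfy package.json ranges,
# rewriting package-lock.json as a side effect.
npm install

# Installs exactly what package-lock.json records; errors out if the
# lockfile and package.json are out of sync. Use this in CI.
npm ci

# Yarn classic equivalent: fail rather than silently update the lockfile.
yarn install --frozen-lockfile
```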

  • We should be displaying hashes in a color scheme determined by the hash (foreground/background colors for each character determined by a hash of the hash, salted by that character's index, adjusted to ensure sufficient contrast).

    That way it's much harder to make one hash look like another.

    • As someone with red/green vision deficiency: if you do this, please don’t forget people like me are unable to distinguish many shades of colours, which would be very disadvantageous here!

      10 replies →

    • Not sure why you're being downvoted, OpenSSH implemented randomart which gives you a little ascii "picture" of your key to make it easier for humans to validate. I have no idea if your scheme for producing keyart would work but it sounds like it would make a color "barcode".

      2 replies →

  • > This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit ...

    I don't agree that the exuberance over the brilliance of this attack is warranted if you give this a moment's thought. The web has been fighting lookalike attacks for decades. This is just a more dynamic version of the same.

    To be honest, this whole post has the ring of AI writing, not careful analysis.

    • > To be honest, this whole post has the ring of AI writing, not careful analysis.

      It has been what, hours? since the discovery? Are you expecting them to spend time analysing it instead of announcing it?

      Also, nearly everyone has AI editing content these days. It doesn’t mean it wasn’t written by a human.

      1 reply →

    • I've been thinking about the use of Levenshtein to make hexadecimal strings look more similar. Levenshtein might be useful for correcting typos, but not so much for the visual similarity of hashes (specifically their start or end sections). Kinda odd.

Here we are again. 12 days ago (https://news.ycombinator.com/item?id=45039764) I commented how a similar compromise of Nx was totally preventable.

Again, this is not the failure of a single person. This is a failure of the software industry. Supply chain attacks have gigantic impacts, yet these are all solved problems. Somebody just has to implement the standard security measures that prevent these compromises. We're software developers... we're the ones to implement them.

Every software packaging platform on the planet should already require code signing, artifact signing, user account attacker access detection heuristics, 2FA, etc. If they don't, it's not because they can't, it's because nobody has forced them to.

These attacks will not stop. With AI (and continuous proof that they work) they will now get worse. Mandate software building codes now.

  • For a package with thousands of downloads a week, does the publishing pace need to be so fast? New version could be uploaded to NPM, then perhaps a notification email to the maintainer saying it will go live on XX date and click here to cancel?

    • A standard release process for Linux distro packages is 1) submitting a new revision, 2) having it approved by a repository maintainer, 3) it cooks a while in unstable, 4) then in testing, and finally 5) is released as stable. So there's an approval process, a testing phase, and finally a release. And since it's impossible for people to upload a brand new package into a package repository without this process, typosquatting never happens.

      Sadly, programming language package managers have normalized the idea that everyone who uses the package manager should be exposed to every random package and release from random strangers with no moderation. This would be unthinkable for a Linux distribution. (You can of course add 3rd-party Linux package repositories, unstable release branches, etc, which should enforce the same type of rules, but they don't have to)

      Linux distros are still vulnerable to supply chain attacks though. It's very rare but it has happened. So regardless of the release process, you need all the other mitigations to secure the supply chain. And once they're set up it's all pretty automatic and easy (I use them all day at work).

      2 replies →

  • A lot of these security measures have trade offs, particularly when we start looking at heuristics or attestation-like controls.

    These can exclude a lot of common systems and software, including automations. If your heuristic is quite naive like "is using Linux" or "is using Firefox" or "has an IP not in the US" you run into huge issues. These sound stupid, because they are, but they're actually pretty common across a lot of software.

    Similar thing with 2FA. SMS isn't very secure, email primes you for phishing, TOTP is good... but it needs to be an open standard, otherwise we're just doing the "exclude users" thing again. TOTP is still phishable, though. Only hardware attestation isn't, but that's a huge red flag and I don't think NPM could do that.

    • I have a hard time arguing that 2FA isn't a massive win in almost every circumstance. Having a "confirm that you have uploaded a new package" step as the default seems good! Someone like npm mandating that a human being press a button with a recaptcha for any package downloaded more than X times per week just feels almost mandatory at this point.

      The attacks are still possible, but they're not going to be nearly as easy here.

      5 replies →

  • > Somebody has to just implement the standard security measures that prevents these compromises.

    I don't disagree, but this sentence is doing a lot of heavy lifting. See also "draw the rest of the owl".

    • We are engineers. Much like an artist could draw the rest of the owl, it's not an unreasonable ask of a field that each day seems to grow more accustomed to learned helplessness.

    • Part of the owl can be how consumers upgrade: don't grab the latest patches immediately, but keep things reasonably up to date. Secondary sources of information about good versions to upgrade to, and when, would allow time for vulns like this to be discovered before you upgrade. The assumption is that vulns can be detected before masses of people install, which I think is true. Then you just need exceptions for critical security fixes.

  • > Somebody has to just implement the standard security measures that prevents these compromises.

    It's not that simple. You can implement the most stringent security measures, and ultimately a human error will compromise the system. A secure system doesn't exist because humans are the weakest link.

    So while we can probably improve some of the processes within npm, phishing attacks like the ones used in this case will always be a vulnerability.

    You're right that AI tools will make these attacks more common. That phishing email was indistinguishable from the real thing. But AI tools can also be used to scan and detect such sophisticated attacks. We can't expect to fight bad actors with superhuman tools at their disposal without using superhuman tools ourselves. Fighting fire with fire is the only reasonable strategy.

  • People focus on attacking Windows because there are more Windows users. What if I told you the world now has a lot more people programming in JavaScript and Python?

    You’re right, this will only get a lot worse.

NPM deserves some blame here, IMO. Countless third party intel feeds and security startups can apparently detect this malicious activity, yet NPM, the single source of truth for these packages, with access to literally every data event and security signal, can't seem to stop falling victim to this type of attack? It's practically willful ignorance at this point.

  • NPM is owned by GitHub and therefore Microsoft, who is too busy putting in Copilot into apps that have 0 reason to have any form of generative AI in them

    • But GitHub does loads of things with security, including reporting compromised NPM packages. I didn't know NPM is owned by Microsoft these days, though. Now that I think about it, Microsoft of all parties should be right on top of this supply chain attack vector: they've been burned hard by security issues for decades, especially in the mid-to-late 90s and early 2000s, when hundreds of millions of devices were connected to the internet but their OS wasn't ready for it yet.

    • Just write a check.md instruction for Copilot to check it for malicious activity, problem solved

    • Is it really owned and run by Microsoft? I thought they only provide infrastructure, servers and funding.

  • For packages which have multiple maintainers, they should at least offer the option to require another maintainer to approve each publish.

  • Why would NPM do anything about it? NPM has been a great source of distributing malware for like a decade now, and none of you have stopped using it.

    Why in the world would they NEED to stop? It apparently doesn't harm their "business"

    • Dozens of businesses have been built to try fixing the npm security problem. There's clearly money in it, even if MS were to charge an access fee for security features.

  • An identical, highly obfuscated (and thus suspicious-looking) payload was inserted into 22+ packages from the same author (many dormant for a while) and published simultaneously.

    What kind of crazy AI could possibly have noticed that on the NPM side?

    This is frustrating as someone who has built/published apps and extensions for other software providers for years and must wait days or weeks for a release to be approved while it's scanned and analyzed.

    For all the security wares that MS and GitHub sell, NPM has seen practically no investment over the years (e.g. just go review the NPM security page... oh, wait, where?).

  • I blame the prevalence of package managers in the first place. Never liked 'em, just for this reason. Things were fine before they became mainstream. Another annoyance is package files that are set to grab the latest version, randomly breaking your environment. This isn't just npm, of course; I hate them all equally.

    • I'm a little confused, is this rage bait or what?

      > Things were fine before they became mainstream

      As in, things were fine before we had commonplace tooling to fetch third party software?

      > package files that are set to grab the latest version

      The three primary Node.js package managers all create a lockfile by default.

      2 replies →

From sindresorhus:

You can run the following to check if you have the malware in your dependency tree:

`rg -u --max-columns=80 _0x112fa8`

Requires ripgrep:

`brew install ripgrep`

https://github.com/chalk/chalk/issues/656#issuecomment-32668...

  • Asking people to run random install scripts just feels very out of place given the context.

    • I would agree if this were one of those `curl | sh` scenarios, but don't we consider things like `brew` to be sufficiently low-risk, akin to `apt`, `dnf`, and the like?

      22 replies →

    • ripgrep is quite well known. It’s not some obscure tool. Brew is a well-established package manager.

      (I get that the same can be said for npm and the packages in question, but I don’t really see how the context of the thread matters in this case).

  • Try the same recursive grep on ~/.npm to see if you have it cached too. Not just the latest in the current project.

    • Haven't installed any modules today, but I ran these commands to clear caches for npm and pnpm just to be safe.

      npm cache clean --force
      pnpm cache delete

      1 reply →

  • Here's something I generated with my coding AI for PowerShell:

    `Get-ChildItem -Recurse | Select-String -Pattern '_0x112fa8' | ForEach-Object { $_.Line.Substring(0, [Math]::Min(80, $_.Line.Length)) }`

    Breakdown of the Command:

    - Get-ChildItem -Recurse: This command retrieves all files in the current directory and its subdirectories.

    - Select-String -Pattern '_0x112fa8': This searches for the specified pattern in the files.

    - ForEach-Object { ... }: This processes each match found.

    - Substring(0, [Math]::Min(80, $_.Line.Length)): This limits the output to a maximum of 80 characters per line.

    ---

    Hopefully this should work for Windows devs out there. If not, reply and I'll try to modify it.

  • If it produces no output, does that mean there's no code that could act in the future? I first acted out of nerves and deleted the whole node_modules and package-lock.json in a couple of freshly opened Astro projects; curious whether I should still consider my web surfing potentially compromised.

    • The malware introduced here is a crypto address swapper. It's possible that even after deleting node_modules that some malicious code could persist in a browser cache.

      If you have crypto wallets on the potentially compromised machine, or intend to transfer crypto via some web client, proceed with caution.

I've come to the conclusion that avoiding the npm registry is a great benefit. The alternative is to import packages directly from the (git) repository. Apart from being a major vector for supply-chain attacks like this one, the registry also has little or no coupling between the source of a project and its published code. The 'npm publish' step pushes local contents into the registry, meaning that a malefactor can easily change the code before publishing.
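For example, npm can install a dependency straight from a git host, pinned to an exact commit (the repo reference and SHA below are placeholders):

```shell
# Pin to an exact commit so the installed code matches audited source;
# "chalk/chalk" and "0123abc" are illustrative, not a recommendation.
npm install github:chalk/chalk#0123abc
```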

  • As a C developer, having been told for a decade that minimising dependencies and vendoring stuff straight from releases is obsolete and regressive, and now seeing people have the novel realisation that it's not, is so, so surreal.

    Although I'll still be told that using single-header libraries and avoiding the C standard library are regressive and obsolete, so gotta wait 10 more years I guess.

    • NPM dev gets hacked, packages compromised, it's detected within couple of hours.

      XZ got backdoored; it reached development versions of major distributions undetected, right inside _sshd_, and it only got detected because someone luckily noticed and investigated slow ssh connections.

      Still some C devs will think it's a great time to come out and boast about their practices and tooling. :shrug:

      2 replies →

    • This isn't part of the current discussion, but what is the appeal of single-header libraries?

      Most times they are actually a normal .c/.h combo, but the implementation was moved to the "header" file and is simply only exposed by defining some macro. When it actually is a single file that can be included multiple times, there is still code in it, so it is a header file in name only.

      What is the big deal in actually using the convention like it is intended to and name the file containing the code *.c ? If is intended to only be included this can be still done.

      > avoiding the C standard library are regressive and obsolete

      I don't understand this either, since one half of libc is syscall wrappers and the other half is primitives which the compiler will use to replace your hand-rolled versions anyway. But this isn't harming anyone, and picking a good "core" library will probably make your code more consistent and readable.

      1 reply →

    • Yeah lol I’m making a C package manager for exactly this. No transitive dependencies, no binaries served. Just pulling source code, building, and being smart about avoiding rebuilds.

      1 reply →

  • npm's recent provenance feature fixes this, and it's pretty easy to set up. It will seriously help prevent things like this from ever happening again, and I'm really glad that big packages are starting to use it.
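For anyone wanting to try it: publishing with provenance from GitHub Actions needs the `id-token: write` permission and the `--provenance` flag. A minimal workflow sketch, assuming an `NPM_TOKEN` secret is already configured:

```yaml
name: publish
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for the provenance attestation
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

The attestation ties the published tarball to the exact commit and workflow that built it.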

    • > When a package in the npm registry has established provenance, it does not guarantee the package has no malicious code. Instead, npm provenance provides a verifiable link to the package's source code and build instructions, which developers can then audit and determine whether to trust it or not

      1 reply →

  • You can do some weird verify thing on your GitHub builds now when they publish to npm, but I've noticed you can still publish from elsewhere even with it pegged to a build?

    But maybe I'm misunderstanding the feature

  • Do you do this in your CI as well? E.g. if you have a server somewhere that most would run `npm install` on builds, you just `git clone` into your node_modules or what?

  • > The alternative is to import packages directly from the (git) repository.

    That sounds great in theory. In practice, NPM is very, very buggy, and some of those bugs impact pulling deps from git repos. See my issue here: https://github.com/npm/cli/issues/8440

    Here's the history behind that:

    Projects with build steps were silently broken as late as 2020: https://github.com/npm/cli/issues/1865

    Somehow no one thought to test this until 2020, and the entire NPM user base either didn't use the feature or couldn't be arsed to raise the issue.

    The problem gets kinda sorta fixed in late 2020: https://github.com/npm/pacote/issues/53

    I say kinda sorta fixed, because somehow they only fixed (part of) the problem when installing a package from git non-globally -- `npm install -g whatever` is still completely broken. Again, somehow no one thought to test this, I guess. The issue I opened, which I mentioned at the very beginning of this comment, addresses this bug.

    Now, I say "part of the problem" was fixed because the npm docs blatantly lie to you about how prepack scripts work, which requires a workaround (which, again, only helps when not installing globally -- that's still completely broken); from https://docs.npmjs.com/cli/v8/using-npm/scripts:

        prepack
        
            - Runs BEFORE a tarball is packed (on "npm pack", "npm publish", and when installing a git dependencies).
    

    Yeah, no. That's a lie. The prepack script (which would normally be used for triggering a build, e.g. TypeScript compilation) does not run for dependencies pulled directly from git.

    Speaking of TypeScript, the TypeScript compiler developers ran into this very problem and adopted a workaround: invoke a script from the npm prepare script, which in turn does some janky checks to guess whether execution is occurring from a source tree fetched from git, and if so, explicitly invokes the prepack script, which then kicks off the compiler and such. This is the workaround they use today:

    https://github.com/cspotcode/workaround-broken-npm-prepack-b...

    ... and while I'm mentioning bugs, even that has a nasty bug: https://github.com/cspotcode/workaround-broken-npm-prepack-b...

    Yes, if the workaround calls `npm run prepack` and the prepack script fails for some reason (e.g. a compiler error), the exit code is not propagated, so `npm install` will silently install the respective git dependency in a broken state.

    How no one looks at this and comes to the conclusion that NPM is in need of better stewardship, or ought to be entirely supplanted by a competing package manager, I dunno.

After all these incidents, I still can't understand why package registries don't require cryptographic signatures on every package. It introduces a bit more friction (developers downloading CI artifacts and manually signing and uploading them), but it prevents most security incidents. Of course, this can fail if it's automated by some CI/CD system, as those are apparently easily compromised.

  • Real registries do[1], npm is just amateur-hour which is why its usage is typically forbidden in enterprise contexts.

    [1] https://www.debian.org/doc/manuals/securing-debian-manual/de...

    • In all fairness—npm belongs to GitHub, which belongs to Microsoft. Amateur-hour is both not a valid excuse anymore, and also a boring explanation. GitHub is going to great lengths to enable SLSA attestations for secure tool chains; there must be systemic issues in the JS ecosystem that make an implementation of proper attestations infeasible right now, everything else wouldn't really make sense.

      So if we're discussing anything here, why not what this reason is, instead of everyone praising their favourite package registry?

      8 replies →

    • It sure hasn’t been forbidden in any enterprise I’ve been in! And they, in my experience, have it even worse because they never bother to update dependencies. Every install has lots of npm warnings.

  • Mmm. But how does the package registry know which signing keys to trust from you? You can't just log in and upload a signing key because that means that anyone who stole your 2FA will log in and upload their own signing key, and then sign their payload with that.

    I guess having some cool down period after some strange profile activity (e.g. you've suddenly logged from China instead of Germany) before you're allowed to add another signing key would help, but other than that?

    • Supporting Passkeys would improve things; not allowing releases for a grace period after adding new signing keys and sending notifications about this to all known means of contact would improve them some more. Ultimately, there will always be ways; this is as much a people problem as it is a technical one.

    • I suppose you'd register your keys when signing up and to change them, you'd have some recovery passphrase, kind of like how 2FA recovery codes work. If somebody can phish _that_, congratulations.

    • That still requires stealing your 2FA again. In this attack they compromised a one-time authenticator code, they'd have to do it a second time in a row, and the user would be looking at a legitimate "new signing key added" email alongside it.

  • > developers downloading CI artifacts and manually signing and uploading them

    Hell no. CI needs to be a clean environment, without any human hands in the loop.

    Publishing to public registries should require a chain of signatures. CI should refuse to build artifacts from unsigned commits, and CI should attach an additional signature attesting that it built the final artifact based on the original signed commit. Public registries should confirm both the signature on the commit and the signature on the artifact before publishing. Developers without mature CI can optionally use the same signature for both the source commit and the artifact (i.e. to attest to artifacts they built on their laptop). Changes to signatures should require at least 24 hours to apply and longer (72 hours) for highly popular foundation packages.

  • I'm a fan of post-facto confirmation. Allow CI/CD to do the upload automatically, and then have a web flow that confirms the release. Release doesn't go out unless the button is pressed.

    It removes _most_ of the release friction while still adding the "human has acknowledged the release" bit.

Yeah I know "everyone can be pwned" etc., but at this point, if you are not using a password manager and are still entering passwords on random websites whose domains don't match the official one, then you have no business doing anything of value on the internet.

  • This is true, but I've also run into legitimate password fields on different domains. Multiple times. The absolute worst offender is mobile app vs browser.

    Why does the mobile app use a completely different domain? Who designed this thing?

  • Yeah, a password manager/autofill would have set off some alarms and likely prevented this, because the browser autofill would have detected a mismatch for the domain npmjs.help.

  • I get the sentiment behind 'just use a password manager', but I don’t think victim-blaming should be the first reflex. Anyone can be targeted, and anyone can fail, even people who do 'everything right'.

    Password managers themselves have had vulnerabilities, browser autofill can fail, and phishing can bypass even well-trained users if the attack is convincing enough.

    Good hygiene (password managers, MFA, domain awareness) certainly reduces risk, but it doesn’t eliminate it. Framing security only as a matter of 'individual responsibility' ignores that attackers adapt, and that humans are not perfect computers. A healthier approach would be: encourage best practices, but also design systems that are resilient when users inevitably make mistakes.

  • Have you used a Microsoft product lately? So many bigco's publishing their org chart as login domains.

  • How does someone intelligent with 2FA get pwned? Serious question.

    • Thinking you're above getting pwned is often step one :)

      It's not easy to be 100% vigilant 100% of the time against attacks deliberately crafted to make you fall for them. All it takes is a single well-crafted attack that strikes when you're tired, and you're done.

    • Numbers game. Plenty of people got the email and deleted it. Only takes one person distracted and thinking "oh yeah my 2FA is pretty old" for them to get pwned.

      2 replies →

I used to think it was stupid that some old, established electro-mechanical manufacturing companies would just block github.com and Internet downloads in general, only allowing code from internal repos that took months to get approved, breaking npm-dependent workflows.

Now? Why everyone isn't setting up their own GitHub mirrors is almost beyond me. They were 100% right.

It was a pain in the ass, but I always appreciated that Maven Central required packages to be signed with a public key pre-associated with the package name.

@junon, if it makes you feel any better, I once had a Chinese hacking group target my router and hijack my DNS configuration specifically to make "amazon.com" point to 1:1 replica of the site just to steal my Amazon credentials.

There was no way to quickly visualize that the site was fake, because it was in fact, "actually" amazon.com.

Phishing sucks. Sorry to read about this.

Edit: To other readers, yes, the exploit failed to use an additional TLS attack, which was how I noticed something was wrong. Otherwise, the site was identical. This was many years ago before browsers were as vocal as they are now about unsecured connections.

  • How did they get a valid ssl cert though?

    • Before HSTS you didn't need a valid certificate. When you typed "amazon.com" in the address bar your browser would first connect to the server unencrypted on port 80 which would then redirect you to the HTTPS address.

      If someone hijacked your DNS, they could direct your browser to connect to their web server instead which served a phishing site on port 80 and never redirected you, thus never ran into the certificate issue. That's part of the reason why browsers started warning users when they're connecting to a website without HTTPS.
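That first plaintext request is exactly what HSTS closes off. A sketch of the server side, assuming Express-style `req`/`res` objects:

```javascript
// Redirect plaintext requests to HTTPS, and on secure responses set HSTS so
// the browser refuses to ever make the plaintext request again -- even if an
// attacker controls DNS, there is no port-80 window left to hijack.
function enforceHttps(req, res, next) {
  if (!req.secure) {
    return res.redirect(301, `https://${req.headers.host}${req.url}`);
  }
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains"
  );
  next();
}
```

Preloading (the `preload` directive plus registration with browsers) removes even the very first unprotected visit.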

    • Could've been a while ago, when SSL cert failures weren't as loud in the browser

  • Any write up? I would like to learn more so I can avoid this.

    • The exact attack they described is less of an issue these days due to HSTS and preloading, but:

      - make sure you're connected to the expected official domain (though many companies are desensitizing us to this threat by using distinct domains instead of subdomains for official business)

      - make sure you're connected over HTTPS (this was most likely their issue)

      - use a password manager which remembers official domains for you and won't offer to auto-fill on phishing sites

      - use a 2FA method that's immune to phishing, like passkeys or security keys (if you do this, you get a lot of leniency to mistakes everywhere else)

  • How did that get past TLS checks? They used Unicode characters that visually looked like amazon.com ?

looks like it won't affect you if you just downloaded the packages locally.

the actual code only runs in a browser context - it replaces all crypto addresses in many places with the attacker's.

a list of the attacker's wallet addresses: https://gist.github.com/sindresorhus/2b7466b1ec36376b8742dc7...

It wouldn't be a perfect solution, but I wonder why browsers don't indicate the registration date for a domain in the URL bar somehow? I bet junon would have seen that and gotten suspicious.

  • I like this idea and could see it being visually represented as a faint red/green bar behind the URL text in the address bar, with a greater amount of the bar being red when the domain is less trusted.

    As for developers: trusting a plugin that reaches out to an external location to determine the reputation of every website they visit seems like a harder sell, though.

  • that's a good one not perfect for sure, hackers would just start buying domains earlier but still...

    • Yeah, but there is a takedown process when a spam site is detected (the server provider can shut off access, etc), so it is a game that is somewhat winnable.

  • There are curated lists of newly registered domain names that some security software uses, so it should be easy to add without any privacy issues.

I can't imagine the struggle the author must be feeling.

Like the need to constantly explain himself because of one single blunder.

It shows how much so many open source projects rely on dependencies which are owned by one person, and that person can be pwned (and maybe hacked too).

Everyone can get pwned, I suppose. From a more technical perspective though, given the amount of AI hype I keep hearing, couldn't something like deno / node / bun just give a slight warning if they think the code might be malware? Or maybe we could have a stable release, verified by external contributors, the way Debian does it, so that instead of the node world moving towards @latest, we move towards something like @verified, which takes builds / source from something like a Debian-style maintained repository.

I hope people can understand that the author is a human too; let's treat him with kindness, because I can't imagine what he might be going through, as I said. Would love a more technical breakdown once things settle and we can postmortem this whole situation.

Wow, I also received the same phishing email even though my packages only have a few hundred downloads a week (eg. bsky-embed).

So I guess a lot more accounts/packages might be affected than the ones stated in the article

  • Did you receive the email in a similar time window? I'm trying to think of ways to scan other repositories for signs of compromise.

Finally validated for writing my own damn ANSI escape codes.

  • Yeah, I get that learning the codes is a little annoying, but it's not actually harder than finding, incorporating, and learning one of the APIs here. Also, one is standard while the other is not. Seems a bit nuts to use a package for this.

    • Hi, you're missing a lot of history here. When Chalk was written, colors in the terminal weren't a flashy thing people tried to do very often, at least not in the JS world. The flashy new Node.js 0.10/0.12 of the time drew in a lot of designers and other aesthetically-oriented folks coming from browsers who wanted to make CLI apps. Chalk filled a hole, letting them do that without needing to understand how TTYs worked.

      Node.js proper has floated the idea of including chalk in the standard library, FWIW.

      4 replies →

    • I would argue that ANSI color output should be something natively supported in stdlib for any general purpose or systems programming language today. Precisely for this reason - it has been a standard for a very long time, and for several years now (since Windows enabled it by default) it is a truly universal standard de facto as well. This is exactly the kind of stuff that stdlib should cover.
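For what it's worth, the escape codes in question are small: SGR sequences are just `"\x1b[" + code + "m"`, standardized in ECMA-48. A dependency-free sketch:

```javascript
// Minimal ANSI SGR helpers -- the whole trick behind terminal colors.
const RESET = "\x1b[0m";
const sgr = (code) => (s) => `\x1b[${code}m${s}${RESET}`;

const bold = sgr(1);
const red = sgr(31);
const green = sgr(32);

console.log(red("error:") + " build failed, " + green(bold("retrying")));
```

A real implementation would also check whether stdout is a TTY (`process.stdout.isTTY`) before emitting codes.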

I'm a little confused after reading everything. I have an Expo app and if I run `npm audit`, I get the notification about `simple-swizzle`.

The GitHub page (https://github.com/advisories/GHSA-hfm8-9jrf-7g9w) says to treat the computer as compromised. What does this mean? Do I have to do a full reset to be sure? Should I avoid running the app until the version is updated?

  • The advisories on GitHub were/are weird for several reasons:

    1. The version matching was wrong (now fixed).

    2. The warning message is (still) exaggerated, imo, though I understand why they’d pass the liability downstream by doing so.

  • I mean the statement is pretty clear

    >Any computer that has this package installed or running should be considered fully compromised. All secrets and keys stored on that computer should be rotated immediately from a different computer. The package should be removed, but as full control of the computer may have been given to an outside entity, there is no guarantee that removing the package will remove all malicious software resulting from installing it.

    It sounds like the package then somehow executes and invites other software onto the machine. If something else has executed then anything the executing user has access to is now compromised.

    • Confusing as hell. From the code analysis shared, the malicious code replaces ethereum and other crypto wallet addresses in a browser context. You can install the malicious package, run it, run it in a browser context (i.e. in your playwright tests), then update to a non-compromised version, and you're fine -- your system is clean.

      This incident would be much more severe if the code actually stole envs etc., because a lot of packages depend on debug as a wildcard.

> Yes, I've been pwned. First time for everything, I suppose. It was a 2FA reset email that looked shockingly authentic. I should have paid better attention, but it slipped past me. Sincerely sorry, this is embarrassing.

My worst nightmare is to wake up, see an email like that and hastily try to recover it while still 90% asleep, compromising my account in the process.

However, I think I can still sleep safe considering I'm using a password manager that only shows up when I'm on the right domain. A 2FA phishing email sending me to some unknown domain wouldn't show my password manager on the site, and would hence give me a moment to consider what's happening. I'm wondering if the author here wasn't using any sort of password manager, or something slipped through anyways?

Regardless, fucking sucks to end up there, at least it ends up being a learned lesson for more than just one person, hopefully. I sure get more careful every time it happens in the ecosystem.

  • I agree, and this is arguably the best reason to use a password manager (with the next being lack of reuse which automatically occurs if you use generated passwords, and then the next being strength if you use generated passwords).

    I generally recommend Google's to any Android users, since it suggests your saved password not only based on domain in Chrome browser, but also based on registered appID for native apps, to extend your point. I'm not sure if third party password managers do this, although perhaps it's possible for anti-monopoly reasons?

    • I actually also received this phishing email, also read it while half-asleep after a 6 week break and clicked on it. Luckily I was saved by exactly this - no password suggestion made me double check the domain.

      1 reply →

    • I use Bitwarden on Android and on web and it is aware of app IDs and (usually) correctly maps them. If it's missing, you can force the mapping [yes this is moderately dangerous] and report it to Bitwarden so other users get the benefit.

    • I'm a pretty big fan of BitWarden/VaultWarden myself... though relatively recently something changed on my Android phone in that the password fills aren't working from inside my browser, I have to copy/paste from the app, which is not only irritating but potentially less safe.

      4 replies →

>> which silently intercepts crypto and web3 activity in the browser, manipulates wallet interactions, and rewrites payment destinations so that funds and approvals are redirected to attacker-controlled accounts without any obvious signs to the user.

If you're doing financial transactions using a big pile of NPM dependencies, you should IMHO be financially liable for this kind of thing when your users get scammed.

  • using NPM at all must be treated as a liability at this point. it's not the first and definitely not the last time NPM got pwned this hard.

    • Lots of very big financial organizations and other F100 companies use a whole lot more node than you'd be comfortable with.

      Luckily some of them actually import the packages to a local distribution point and check them first.

  • It isn't uncommon in crypto ecosystems for the core foundation to shovel slop libraries on application developers.

Tips to protect yourself from supply-chain attacks in the JavaScript ecosystem:

- Don't update dependencies unless necessary

- Don't use `npm` to install NPM packages, use Deno with appropriate sandboxing flags

- Sign up for https://socket.dev and/or https://www.aikido.dev

- Work inside a VM

  • > Don't update dependencies unless necessary

    And get yourself drowning in insurmountable technical debt in about two months.

    The JS ecosystem moves at an extremely fast pace, and if you don't upgrade packages (semi-)daily you might inflict a lot of pain on yourself once a certain number of packages start to contain incompatible version dependencies. It sucks a lot, I know.

This is really scary. It could have totally happened to me too. How can we design security which works even when people are tired or stressed?

Once upon a time, I used a software called passwordmaker. Essentially, it computed a password like hash(domain+username+master password). Genius idea, but it was a nightmare to use. Why? Because amazon.se and amazon.com share the same username/password database. Similarly, the "domain" for Amazon's app was "com.amazon.something".

Perhaps it's time for browser vendors to strongly bind credentials to the domain, the whole domain and nothing but the domain, so help me Codd.

Definitely sounds like spear phishing targeting you specifically.

Kudos to you for owning up to it.

As others have said, it's the kind of thing that could happen to anyone, unfortunately.

  • I also received the same phishing email and I only have packages with a few thousand downloads per week.

Did someone write a script to check whether the attacker wallets actually received any transactions? I checked a few bitcoin wallet balances manually and found nothing, but the first ETH wallet had a few cents. I would be curious about the total financial impact so far.

When I run `npm audit`, it points me to a security advisory at GitHub. For example, for debug, it is https://github.com/advisories/GHSA-8mgj-vmr8-frr6 .

That page says that the affected versions are ">=0". Does that seem right? That page also says:

> Any computer that has this package installed or running should be considered fully compromised. All secrets and keys stored on that computer should be rotated immediately from a different computer. The package should be removed, but as full control of the computer may have been given to an outside entity, there is no guarantee that removing the package will remove all malicious software resulting from installing it.

Is this information accurate?

A super quick script to check the deps in your package-lock.json file is here[0].

[0]: https://gist.github.com/martypitt/0d50c350aa7f0fc73354754343...

Socket was all over this - https://socket.dev/blog/npm-author-qix-compromised-in-major-...

  • Nathan, do you work for Socket? I think you should at least disclose that when sharing posts here.

    • I've never heard of Socket before this thread. They could be taking advantage of this news and promoting the company, as it's mentioned quite a few times in this thread. Or it's just a good service that I should probably be using.

I already posted this idea during the nx incident: we need some mechanism for package managers to ignore new packages for a defined time. Skip all packages that were published less than 24 hours ago.

Most of those attacks are detected and fixed quickly, because a lot of people check newly published packages. Also the owners and contributors notice it quickly. But a lot of consumers of the package just install the newest release. With some grace period those attacks would be less critical.
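A sketch of the check such a mechanism needs, operating on the publish-time metadata the registry already serves (the same object `npm view <pkg> time --json` prints):

```javascript
// Treat a version as installable only once it has been public for a minimum
// age. Unknown or unparsable publish dates are rejected conservatively.
function isOldEnough(timeMap, version, minAgeHours = 24, now = Date.now()) {
  const published = Date.parse(timeMap[version]);
  if (Number.isNaN(published)) return false;
  return now - published >= minAgeHours * 60 * 60 * 1000;
}
```

A resolver using this would simply fall back to the newest version that passes the age check instead of the absolute latest.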

I'm really surprised that NPM does not have better means to detect and respond to events like this. Since all the affected packages were by the same author, it would seem straightforward to have a mitigation event that rolls back all recent changes to some recent milestone. Then it's just a question of knowing when to hit the button.

One reason why I run everything on my development machine in a docker container: you can't trust any package.

I use bun, but similar could be done with npm

Add to .bashrc:

  alias bun='docker run --rm -it -u $(id -u):$(id -g) -p 8080:8080 -v "$PWD":/app -w /app my-bun bun'

then you can use `bun` command as usual.

Dockerfile:

  FROM oven/bun:1 AS base
  VOLUME [ "/app" ]
  EXPOSE 8080/tcp
  WORKDIR /app
  # Add your custom libs
  # RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
  #  ... \

Build the image once:

  $ docker build -t "my-bun" -f "Dockerfile" .

I managed large health groups for a long time; we actually care about security: billions of patient interactions, never a compromise. I also managed the modernization of the payment platform for the largest restaurant company in the world, billions of dollars a year. An early thing we did was freeze versions, maintain local package repos, and update carefully. It is very concerning how rarely these things are done. Tens of thousands of random people are in the core supply chain of most node projects, and there seems to be a lot of carelessness about that fact.

I'm curious if anyone is tracking transactions against the wallet addresses in the malicious code - I assume that is essentially the attackers' return on investment here.

  • Just ran a script to do this – doesn't seem like there's much going in, other than one test transaction.

Actually, my problem is not really with NPM itself or the fact that it can be hacked, but with the damn auto-update policy of software: as users we usually have no idea which versions are installed, and there is often not even a way to roll back to a safe version.

All these Chrome, VSCode, Discord, Electron-apps, browser extensions, etc – they all update ± every week, and I can't even tell what features are being added. For comparison, Sublime updates once a YEAR and I'm totally fine with that.

How is it possible that this code (line 9 of the index.js) isn't present in the source github repo, but can be seen in the beta feature of npmjs.com?

Also, the package 1.3.3 has been downloaded 0 times according to npmjs.com; how was the writer of this article able to detect this without incrementing the download counter?

  • The discrepancy comes from how npm packages are published. What you see on GitHub is whatever the maintainer pushed to the repo, but what actually gets published to the npm registry doesn’t have to match the GitHub source. A maintainer (or someone with access) can publish a tarball that includes additional or modified files, even if those changes never appear in the GitHub repo. That’s why the obfuscated code shows up when inspecting the package on npmjs.com.

    As for the “0 downloads” count: npm’s stats are not real-time. There’s usually a delay before download numbers update, and in some cases the beta UI shows incomplete data. Our pipeline picked up the malicious version because npm install resolved to it based on semver rules, even before the download stats reflected it. Running the build locally reproduced the same issue, which is how we detected it without necessarily incrementing the public counter immediately.

  • > How is it possible that this code (line 9 of the index.js) isn't present in the source github repo, but can be seen in the beta feature of npmjs.com

    You may also be interested in npm package provenance [1] which lets you sign your npm published builds to prove it is built directly from the source being displayed.

    This is something ALL projects should strive to setup, especially if they have a lot of dependent projects.

    1: https://github.blog/security/supply-chain-security/introduci...

I have nothing to do with this but I'm still getting second-hand embarrassment. Here is an example: the is-arrayish package, 73.8 MILLION downloads per week. The code? 3 lines to check if an object can be used like an array.

I am sorry, but this is not due to not having a good standard library; this is just bad programming, pure laziness. At this point, just blacklist every package starting with is-.
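For scale, the whole value such a package provides is roughly this; a sketch, not the package's actual source:

```javascript
// A minimal "array-like" check: real arrays, plus objects that quack like one
// (a numeric, non-negative length -- e.g. the old `arguments` object).
function isArrayish(obj) {
  if (!obj || typeof obj === "string") return false;
  return (
    Array.isArray(obj) ||
    (typeof obj.length === "number" && obj.length >= 0)
  );
}
```

Inlining something like this removes an entire transitive dependency from the attack surface.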

  • Meanwhile in Python: 134 million weekly downloads, seemingly slowly trending upward over time, for https://pypistats.org/packages/six which provides third-party compatibility for a version of Python that dropped support over five years ago.

  • I wrote it 10 years ago, I think before Node was v1, and forgot about it for a long time. This was back before we had spreads, classes, or TypeScript, when we had to use DOM arrays and other weird structures, and when `arguments` wasn't an array but an object.

        > (function() { return Array.isArray(arguments); })()
        false

    • Do you think it might be time to deprecate and then retire this package, given that the ecosystem has evolved? Sure, it'll mean downstream packages will need to update their reliance on `is-arrayish` and use some other means suited to their task, but perhaps that's positive design pressure?

      1 reply →

  • You don’t get it. People don’t add “is-arrayish” directly as a dependency. It goes like this:

    1) N tiny dubious modules like that are created by maintainers (like Qix)

    2) The maintainer then creates 1 super useful non-tiny module that imports those N dubious modules.

    3) Normal devs add that super useful module as a dependency… and ofc, they end up with countless dubious transitive dependencies

    Why do maintainers do that? I don't think it's ignorance or laziness or a lack of knowledge about good software engineering. It's either ego ("I'm the maintainer of N packages with millions of downloads" sounds better than "I'm the maintainer of 1 package"), or because they get more donations, or because they are actually planning to drop malware some time soon.

    • I think the real answer is far less nefarious.

      They personally buy into modularization, do-one-thing-do-it-well. Also engineering is fun, and engineering more things is more fun.

Luckily this seems to be browser-specific cryptocurrency malware, not something that runs in Node.js environments, but it might still be wise for us all to do some hardening on our software and make sure we're doing things like version pinning.

Edit: As of this morning, `npm audit` will catch this.

Another great example of why automatically bumping dependencies to the latest versions with tools like dependabot or renovate is not a good idea. If it's not a critical update, it's better to let the world be your guinea pig and only update after there's been a while of real-world usage and analysis. If it is critical enough that you have to update right away, then you take the time to manually research what's in the package, what changed, and why it's being updated.

Ugh, I almost had my github compromised two years ago with a phishing email from circleci dot net. Almost. The github login page still under that domain made me stop in my tracks.

Completely understand people getting phished.

How long before npm mandates phishing-resistant MFA? At least for accounts that can publish packages with this many downloads.

It looks like a lot of the author's packages have been compromised (in total over 1 billion downloads). I've updated the title and added information to the blog post.

This is terrifying. Reminder to store your crypto in a hardware based wallet like Ledger not browser based. Stay frosty when making transfers from exchanges.

  • While true, this is also an eye opening event of how much worse it could be if it was more generic and not limited to crypto wallet addresses.

    • Seems like exchanges should have a confirmation screen that shows the destination addresses from XHR requests before processing, though I suppose the malicious script could just change the DOM showing the address you entered instead of the modified address it injected.

  • How is it terrifying? They clicked through a 2FA reset email, a process that I have never, and will never need to go through, and seemingly one that they didn't even initiate.

    • How many developers are there like him? If not him, they'll target someone else. And while you or I will never do such a thing under normal circumstances, that's a pretty simple mistake to make if you are stressed, sleep deprived or sick. We are supposed to have automatic safeguards against such simple mistakes. (We used to design stuff with the assumption that if a human mistake is possible, someone will eventually make it for sure.)

      2 replies →

  • If an exchange got compromised there's no way you would know you're sending to the attackers address

I maintain a package on npm with >1M weekly downloads. I also got the same phishing e-mail, although I didn't click it. Here are the headers from the phishing e-mail I got:

Return-Path: <ndr-6be2b1e0-8c4b-11f0-0040-f184d6629049@mt86.npmjs.help>
X-Original-To: martin@minimum.se
Delivered-To: martin@minimum.se
Received: from mail-storage-03.fbg1.glesys.net (unknown [10.1.8.3]) by mail-storage-04.fbg1.glesys.net (Postfix) with ESMTPS id 596B855C0082 for <martin@minimum.se>; Mon, 8 Sep 2025 06:47:25 +0200 (CEST)
Received: from mail-halon-02.fbg1.glesys.net (37-152-59-100.static.glesys.net [37.152.59.100]) by mail-storage-03.fbg1.glesys.net (Postfix) with ESMTPS id 493F2209A568 for <martin@minimum.se>; Mon, 8 Sep 2025 06:47:25 +0200 (CEST)
X-SA-Rules: DATE_IN_PAST_03_06,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,FROM_FMBLA_NEWDOM,HTML_FONT_LOW_CONTRAST,HTML_MESSAGE,MIME_HTML_ONLY,SPF_HELO_NONE,SPF_PASS
X-RPD-Score: 0
X-SA-Score: 1.1
X-Halon-ID: e9093e1f-8c6e-11f0-b535-1932b48ae8a8
Received: from smtp-83-4.mailtrap.live (smtp-83-4.mailtrap.live [45.158.83.4]) by mail-halon-02.fbg1.glesys.net (Halon) with ESMTPS id e9093e1f-8c6e-11f0-b535-1932b48ae8a8; Mon, 08 Sep 2025 06:47:23 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; x=1757637200; d=smtp.mailtrap.live; s=rwmt1; h=content-transfer-encoding:content-type:from:to:subject:date:mime-version: message-id:feedback-id:cfbl-address:from; bh=46LbKElKI+JjrZc6EccpLxY7G+BazRijag+UbPv0J3Y=; b=Dc1BbAc9maHeyNKed/X7iAPabcuvlgAUP6xm5te6kkvGIJlame8Ti+ErH8yhFuRy/xhvQTSj8ETtV f3AElmzHDWcU3HoD/oiagTH9JbacmElSvwtCylHLriVeYbgwhZVzTm4rY7hw/TVqNE5xIZqWWCMrVG wi+k9uY+FUIQAh7Ta2WiPk/A4TPh04h3PzA50zathvYcIsPC0iSf7BBE+IIjdLXzDzNZwRmjgv2ZHW GAx/FRCPFgg0PbVvhJw98vSHnKmjPO/mmcotKFG+MUWkCtTu28Mm46t7MI7z5PrdCXZDA7L1nVnIwE ffIf0zED32Z6tFSJFNmYgFZlD6g+DnQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; x=1757637200; d=npmjs.help; s=rwmt1; h=content-transfer-encoding:content-type:from:to:subject:date:mime-version: message-id:feedback-id:cfbl-address:from; bh=46LbKElKI+JjrZc6EccpLxY7G+BazRijag+UbPv0J3Y=; b=DyWvxSOjMf7WfCVtmch+zw63kZ/OOBjcWnh1kIYs/hozgemb9mBIQCMqAdb4vSZChoW5uReVH5+k5 Jaz7UodbPJksVkYWqJOVg6nyx5EaYMYdgcw1+BCct/Sf2ceFwWurhupa6y3FBTFWBYLhcsAXERlx2l IuxWlpZoMDEBqDxjs8yvx/rkBrcd/2SNTcI+ooKJkrBIGBKuELOd3A5C6jlup6JNA4bE7vzP3FUfKw y0357UMnn45zWHm9HvudO4269FRlNjpiJaW7XF1/ANVrnDlNWfUGNQ5yxLZqmQDTtxFI7HcOrF3bTQ O/nrmVOvN9ywMvk/cJU4qGHqD9lT32A==
CFBL-Address: fbl@smtp.mailtrap.live; report=arf
X-Report-Abuse-To: abuse@mailtrap.io
Received: from npmjs.help by smtp.mailtrap.live with ESMTPSA 6aee9fff-8c4b-11f0-87bb-0e939677d2a1; Mon, Sep 08 2025 00:33:20 GMT
Feedback-ID: ss:770486:transactional:mailtrap.io
Message-ID: <6be2b1e0-8c4b-11f0-0040-f184d6629049@npmjs.help>
X-Mt-Data: bAX0GlwcNW6Dl_Qnkf3OnU.GLCSjw_4H01v67cuDIh2Jkf52mzsVFT_ZEVEe0W6Lf3qzW2LP_TCy93I46MCsoT0pB9HozQkvCw22ORSCt3JBma1G3v9aDEypT1DLmyqlb6hYLF3H7tJCgcxTU5pbijyNaOFtoUMdiTA6jxaONeZbBj.SKUa5CLT5TMpeNHG6oGIiY_jqlU.nQkxGPY3v9E34.Nz4ga8p9Pd_BplftaE~--2CLrluJMY65S5xFl--IISg0olYJu6DVyVDEcJ.AQ~~
MIME-Version: 1.0
Date: Mon, 08 Sep 2025 00:33:20 +0000
Subject: Two-Factor Authentication Update Required
To: "molsson" <martin@minimum.se>
From: "npm" <support@npmjs.help>
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

  • That domain (npmjs[.]help) has been taken down. Looks like it was purchased and started hosting on September 5th, 2025.

I wanted to remind everyone once again that hardware keys are immune to phishing because they check the website domain, unlike humans.

Does anybody have tips on how to invalidate a wallet address response if it's intercepted and modified like this?

  • Off the top of my head, you could include your own checksum in the payload, since their code only modifies the address. But nothing would prevent them from reverse engineering the checksum, too.

    There are ways to detect a replaced/proxied global window function too, and that's another arms race.

Maintainer phished.

Was caught quickly (hours? hard to be sure, the versions have been removed/overwritten).

Attacker owns npmjs.help domain.

  • Noticed it after ten minutes and contacted the author immediately; he seems to be working on restoring his account and removing the malware from the published packages.

    Kinda "proud" of it haha :D

Developer account got hijacked through phishing. @junon acknowledged this readily and is trying to get it sorted. Meanwhile, this is a mistake that can happen to anyone, especially under pressure. So no point in discussing the personal oversight.

So let me raise a different concern. This looks like an exploit for web browsers, where an average user (and most above average users) have no clue as to what's running underneath. And cryptocurrency and web3 aren't the only sensitive information that browsers handle. Meaning that similar exploits could arise targeting any of those. With millions of developers, someone is bound to repeat the same mistake sooner or later. And with some packages downloaded thousands of times per day, some CI/CD system will pull it in and publish it in production. This is a bigger problem than just a developer's oversight.

- How do the end user protect themselves at this point? Especially the average user?

- How do you prevent supply chain compromises like this?

- What about other language registries?

- What about other platforms? (binaries, JVM, etc?)

This isn't a rhetorical question. Please discuss the solutions that you use or are aware of.

  • > Meanwhile, this is a mistake that can happen to anyone, especially under pressure. So no point in discussing the personal oversight.

    Unless this is a situation that could've been easily avoided with a password manager since the link was from a website not in your manager's database, so can't happen to anyone following security basics, and the point of discussing the oversight instead of just giving up is to increase the share of people who follow the basics?

    • I've mentioned this elsewhere. I was mobile, I don't often use it there, and I was in a rush.

  • One thing I've been thinking of is to restrict global access to packages. Something like ansi-styles doesn't need access to the crypto global, or to the DOM, or make web requests, etc. So if you can sandbox individual libraries, you can decrease the attack surface a lot.

    You could imagine that a compromised pad-left package could read the contents of all password inputs on the page and send it to an attacker server, but if you don't let that package access the document, or send web requests, you can avoid this compromise.

  • > How do the end user protect themselves at this point? Especially the average user?

    Don't use unregulated financial products. The likelihood of a bank being hit by this isn't zero - but in most parts of the world they would be liable and the end user would be refunded.

    > How do you prevent supply chain compromises like this?

    Strictly audit your code.

    There's no magic answer here. Oh, I'm sure you can throw an LLM at the problem and hope that the number of false positives and false negatives don't drown you. But it comes down to having an engineering culture which moves slowly and doesn't break things.

    • So Node has semver ranges and package-lock.json, but these are pretty cumbersome, and they're a huge part of this problem.

      Why a package with 10+ million weekly downloads can just be "updated" like this is beyond me. Have a waiting period. Make sure you have to be explicit. Use dates. Some of the packages hadn't been updated in 7 years and then we firehosed thousands of CI/CD jobs with them within minutes?

      npm and most of these package managers should be getting some basic security measures like waiting periods. It would be nice if I could turn semver ranges off, to be honest, and force folks to actually publish new packages. I'm always bummed when a four-layer-deep dependency just updates at 10PM EST because that's when the open source guy had time.

      Packages used to break all the time, but I guess things kind of quieted down and people stopped using semvers as much. Like I think major packages like React don't generally have "somedepend" : "^1.0.0" but go with "1.0.0"

      I think npm and the community knew this day was coming and just hopes it'll be fixed by tooling, but we need fundamental change in how packages are updated and verified. The idea that we need to "quickly" rollout a security fix with a minor patch is a good idea in theory, but in practice that doesn't really happen all that often. My audit returns all kinds of minor issues, but its rare that I need it...and if that's the case I'll probably do a direct update of my packages.

      Package-lock.json was a nice bandaid, but it shouldn't have been the final solution IMHO. We need to reduce semver usage, have some concept of package age/importance, and npm needs a scanner that can detect obviously obfuscated code like this and at least put the package in quarantine. We could also use some hooks in npm so that developers could write easy to control scripts to not install newer packages etc.
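
      The "turn semver off" idea is mechanically simple; here is a sketch that rewrites the common range forms in a package.json object into exact pins (it only handles `^`/`~` prefixes; real ranges like `>=` or `1.x` would need a proper semver parser):

```javascript
// Sketch: pin caret/tilde ranges to exact versions in a package.json object.
function pinExact(pkg) {
  const pinned = { ...pkg };
  for (const field of ['dependencies', 'devDependencies']) {
    if (!pkg[field]) continue;
    pinned[field] = Object.fromEntries(
      Object.entries(pkg[field]).map(([name, range]) =>
        [name, range.replace(/^[\^~]/, '')])
    );
  }
  return pinned;
}
```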

      1 reply →

  • Packj [1] detects malicious PyPI/NPM/Ruby/PHP/etc. dependencies using behavioral analysis. It uses static+dynamic code analysis to scan for indicators of compromise (e.g., spawning of shell, use of SSH keys, network communication, use of decode+eval, etc). It also checks for several metadata attributes to detect bad actors (e.g., typo squatting).

    1. https://github.com/ossillate-inc/packj

  • > - How do the end user protect themselves at this point? Especially the average user?

    - Install as little software as possible, use websites if possible.

    - Keep important stuff (especially cryptocurrency) on a separate device.

    - If you are working on a project that pulls 100s of dependencies from a package registry, put that project on a VM or container.

    • > Install as little software as possible, use websites if possible.

      If I understood this correctly, this is an exploit for the browser.

As an outsider to the npm ecosystem, reading this list of packages is astonishing. Why do js people import someone else's npm module for every little trivial thing?

  • Lack of a good batteries-included stdlib. You're either importing a ton of little dependencies (which then depend on other small libraries) or you end up writing a ton of really basic functionality yourself.

    • This is the answer IMO. The number of targets and noise would be a lot less if JS had a decent stdlib or if we had access to a better language in the browser.

      I have no hope of this ever happening and am abandoning the web as a platform for interactive applications in my own projects. I’d rather build native applications using SDL3 or anything else.

      8 replies →

    • npmjs is the stdlib, or what emerged from it.

      It started as CommonJs ([1]) with Server-side JavaScript (SSJS) runtimes like Helma, v8cgi, etc. before node.js even existed but then was soon totally dominated by node.js. The history of Server-side JavaScript btw is even longer than Java on the server side, starting with Netscape's LiveWire in 1996 I believe. Apart from the module-loading spec, the CommonJs initiative also specified concrete modules such as the interfaces for node.js/express.js HTTP "middlewares" you can plug as routes and for things like auth handlers (JSGI itself was inspired by Ruby's easy REST DSL).

      The reason for is-array, left-pad, etc. is that people wanted to write idiomatic code rather than use idiosyncratic JS typechecking code everywhere and use other's people packages as good citizens in a quid pro quo way.

      [1]: https://wiki.commonjs.org/wiki/CommonJS

      Edit: the people crying for an "authority" to just impose a stdlib fail to understand that the JS ecosystem is a heterogeneous environment around a standardized language with multiple implementations; this concept seems lost on TypeScripters who need big daddy MS or other monopolist to sort it all out for them

      4 replies →

    • Yes this is the fundamental problem.

      It started with browsers giving you basically nothing. Someone had to invent jQuery 20 years ago for sensible DOM manipulation.

      Somehow this ethos permeated into Node which also basically gives you nothing. Not even fundamental things like a router or db drivers which is why everyone is using Express, Fastify, etc. Bun and Deno are fixing this.

    • I just never got the argument against including things like the sort of text formatting tools and such that people always import libraries for. It’s not like an embedded system for mission-critical realtime applications where most functions people write for it get formal proofs — it’s freaking javascript. Sure it’s become a serious tool used for serious tasks for some reason, but come on.

    • Not just a stdlib, lack of an SDK as well. Both Deno and Bun have decided to ship with tooling included, which cuts down on dev dependency bloat.

  • I can provide you with some missing background as I was a prior full time JavaScript/TypeScript developer for 15 years.

    Most people writing JavaScript code for employment cannot really program. It is not a result of intellectual impairment, but appears to be more a training and cultural deficit in the work force. The result is extreme anxiety at the mere idea of writing original code, even when trivial in size and scope. The responses vary but often take the form of reused cliches of which some don't even directly apply.

    What's weird about this is that it is mostly limited to the employed workforce. Developers who are self-taught or spend as much time writing personal code on side projects don't have this anxiety. This is weird because the resulting hobby projects tend to be substantially more durable than products funded by employment that are otherwise better tested by paid QA staff.

    As a proof ask any JavaScript team at your employment to build their next project without a large framework and just observe how they respond both verbally and non-verbally.

    • > Most people writing JavaScript code for employment cannot really program.

      > As a proof ask any JavaScript team at your employment to build their next project without a large framework and just observe how they respond both verbally and non-verbally.

      With an assumption like that, I bet the answer is mostly the same if you ask any Java/Python dev for example — build your next microservice/API without Spring or DRF/Flask.

      Even though I only clock in at about 5 YOE, I'm really tired of hearing these terrible takes, since I've met a plentiful share of non-JS backend folks, for example, who have no idea about basic API design, design patterns, or even how to properly use the same framework they use for every single project.

      1 reply →

    • > The responses vary but often take the form of reused cliches of which some don't even directly apply.

      "It has been tested by a 1000 people before me"

      "What if there is an upstream optimisation?"

      "I'm just here to focus on Business Problems™"

      "It reduces cognitive load"

      ---

      Whilst I think you are exaggerating, I do recognise this phenomenon. For me, it was during the pandemic when I had to train / support a lot of bootcamp grads and new entrants to the career. They were anxious to perform in their new career and interpreted that as shipping tickets as fast as possible.

      These developers were not dumb but they had... like, no drive at all to engage with problems. Most programmers should enjoy problems, not develop a kind of bad feeling behind the eyes, or a tightness in their chest. But for these folks, a problem was a threat, of a bad status update at their daily Scrum.

      Dependencies are a socially condoned shortcut to that. You can use a library and look like a sensible and pragmatic engineer. When everyone around you appears to accept this as the norm, it's too easy to just go with the flow.

      I think it is a change in the psychological demographic too. This will sound fanciful. But tech used to select for very independent, stubborn, disagreeable people. Now, agreeableness is king. And what is more agreeable than using dependencies?

      4 replies →

    • Not my experience at all. It's more like a) JS devs view NPM packages as a mark of pride and so they try to make as many as possible (there are people proud of maintaining hundreds of packages, which is obviously dumb), and b) people are lazy and will take a ready-made solution if it's available, and c) there are a lot of JavaScript developers.

      The main reasons you don't see this in other languages is they don't have so many developers, and their packaging ecosystems are generally waaay higher friction. Rust is just as easy, but way higher skill level. Python is... not awful but it's definitely still a pain to publish packages for. C++, yeah why even bother.

      If Python ever officially adopts uv and we get a nice `uv publish` command then you will absolutely see the same thing there.

      2 replies →

    • If Javascript people were bad programmers, we wouldn't see two new frontend frameworks per year. Many of them are ambitious projects that must have had thousands of hours put in by people who know the language well.

      The observation is real however. But every culture develops its own quirks and ideas, and for some reason this has just become a fundamental part of Javascript's. It's hard to know why after the fact, but perhaps it could spark the interest of sociologists who can enlighten us.

      1 reply →

    • Glad to see someone else identify the anxiety at the root of the culture.

      After an npm incident in 2020 I wrote up my thoughts. I argue that this anxiety is actually somewhat unique to JS which is why we don't see a similar culture in other languages ecosystems

      https://crabmusket.net/java-scripts-ecosystem-is-uniquely-pa...

      Basically, the sources of paranoia in the ecosystem are

      1. Weak dynamic typing

      2. Runtime (browser engine) diversity and compatibility issues

      3. Bundle size (the "physics" of code on a website)

      In combination these three things have made JS's ecosystem really psychologically reliant on other people's code.

    • I don't quite know how to put this thought together yet, but I've noticed that no one quite hates programming more than this class of programmers. It's like playing on a football team with people who hate football.

      A key phrase that comes up is "this is a solved problem." So what? You should want to solve it yourself, too. It's the PM's job to tell us not to.

  • Having a module for every little trivial thing allows you to only bring these modules inside the JS bundle you serve to your client. If there's a problem in one trivial-thing function, other unrelated trivial things can still be used, because they are not bundled in the same package.

    A comprehensive library might offer a more neat DX, but you'd have to ship library code you don't use. (Yes, tree-shaking exists, but still is tricky and not widespread.)

    • Things like this are good illustrations as to why many feel that the entire JS ecosystem is broken. Even if you have a standard lib included in a language, you wouldn't expect a bigger binary because of the standard lib. The JS solution is often more duct tape on top of a bad design. In this case tree shaking, which may or may not work as intended.

      10 replies →

    • Given how fat a modern website is, I am not sure that a kitchen sink library would change much. It could actually improve things because there would be fewer redundant libraries for basic functionality.

      Say there is neoleftpad and megaleftpad - both could see widespread adoption, so you are transitively dependent on both.

      5 replies →

  • This conversation has been a thing since at least the leftpad event. It's just how the js ecosystem works, it seems. The standard library is too small, perhaps?

  • It's easier to find something frustrating in large code changes than in single line imports, even if the effective code being run is the same -- the PR review looks cleaner and safer to just import something that seems "trusted".

    I'm not saying it is safer, just to the tired grug brain it can feel safer.

  • Same reason they do in rust.

    The rust docs, a static site generator, pull in over 700 packages.

    Because it’s trivial and easy

  • "JS people" don't, but certain key dependencies do, and there are social / OSS-political reasons why.

    Why do "Java people" depend on lowrie's itext? Remember the leftpad-esque incident he initiated in 2015?

  • You typically don't. But a lot of packages that you do install depend on smaller stuff like this under the hood (not necessarily good, and arguably better handled with bespoke code in the package, but it is what it is).

  • Which of these would you prefer to reimplement?

    Debug, chalk, ansi-styles?

    ---

    You can pretend like this is unique to JS ecosystem, but xz was compromised for 3 years.

    • > You can pretend like this is unique to JS ecosystem, but xz was compromised for 3 years.

      Okay, but you're not suggesting that a compression algorithm is the same scale as "is-arrayish". I don't think everyone should need to reimplement LZMA but installing a library to determine if a value is an array is bordering on satire.

      5 replies →

    • A common refrain here seems to be that there is no good std lib, which makes sense for something like "chalk" (used for pretty printing?)

      That being said, let's take color printing in terminal as an example. In any sane environment how complicated would that package have to be, and how much work would you expect it to take to maintain? To me the answer is "not much" and "basically never." There are pretty-print libraries for OS terminals written in compiled languages from 25 years ago that still work just fine.

      So, what else is wrong with javascript dev where something as simple as coloring console text has 32 releases and 58 github contributors?

      2 replies →

    • I wouldn't use debug or ansi-styles. They're not even remotely close to being worth adding a dependency. Obviously none of them are trustworthy now though.

      4 replies →

  • This is spreading everywhere, Rust, Python, ...

    • Rust is an interesting case to me.

      There are certainly a lot of libraries on crates.io, but I’ve noticed more projects in that ecosystem are willing to push back and resist importing unproven crates for smaller tasks. Most imported crates seem to me to be for bigger functionality that would be otherwise tedious to maintain, not something like “is this variable an array”.

      (Note that I’m not saying Rust and Cargo are completely immune to the issue here)

    • Not Java, thankfully! Libraries containing 1-2 trivial classes do exist, but they're an exception rather than a rule. Might be that the process of publishing to Maven Central is just convoluted enough to deter the kinds of people who would publish such libraries.

      7 replies →

    • The difference, at least in languages like Java or Python, is that there is a pretty strong "standard" library that ships with the language, and which one can assume will be kept up-to-date. It is very hard to assume that for NPM or Rust or any other crowd-sourced library system.

  • Extreme aversion to NIH syndrome, perhaps? I agree that it's weird. Sure, don't try to roll your own crypto library but the amount of `require('left-pad')` in the wild is egregious.

i use node/npm moderately

is there a runnable command to determine if the package list has a compromised version of anything?

Is the npm package ecosystem fixable at this point? It seems to be flawed by design.

Is there a way to not accept any package version less than X months old? It's not ideal because malicious changes may still have gone undetected in that time span.

Time to deploy AI to automatically inspect packages for suspect changes.
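
A sketch of how the age rule could work on the client side, using the publish-time map the npm registry already returns for every package (e.g. via `npm view <pkg> time --json`):

```javascript
// Pick the newest version published at least `minAgeDays` ago, given the
// registry's time map ({ version: ISO date }, plus created/modified keys).
// A sketch only; a real tool would still want an override for urgent fixes.
function newestMatureVersion(timeMap, minAgeDays, now = Date.now()) {
  const cutoff = now - minAgeDays * 24 * 60 * 60 * 1000;
  return Object.entries(timeMap)
    .filter(([v]) => v !== 'created' && v !== 'modified')
    .filter(([, date]) => Date.parse(date) <= cutoff)
    .map(([v]) => v)
    .pop(); // time maps list versions in publish order, so last = newest
}
```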

  • It's a tricky thing because what if the update fixes a critical vulnerability? Then you'd be stuck on the exploitable version for X months longer

Incidents like this show how fragile the supply chain really is. One compromised maintainer account can affect thousands of projects. We need better defaults for package signing and automated trust checks, otherwise we'll just keep repeating the same cycle.

The malware steals crypto in end-user browsers.

Another one for “web3 is going great”…

  • I dislike web3 and the overuse of crypto as much as you do. But look at the nature of the exploit. It isn't limited to crypto or web3. There are other secrets and sensitive information that browsers regularly hold in their memory. What about them?

I'll come back to this thread when someone asks me why I hate JavaScr*pt yet again. this will be one of a thousand links.

Given that most of these kinds of attacks are detected relatively quickly, npm should implement a feature where it doesn't install/upgrade to package versions newer than 3 days, and just uses the previous version.

  • What if the latest patch is (claiming to be) a security fix? Then that's 3 days of more insecurity.

  • Would it be spotted quickly if nobody got the update though? It'd probably just go undetected for 3 days instead. In this case one team spotted it because their CI picked up the new version (https://jdstaerk.substack.com/p/we-just-found-malicious-code...).

    • The question is who picks up the vulnerable version first. With minimal version selection (like Go has), the people with a direct dependency on the vulnerable library go first, after running a command to update their direct dependencies. People with indirect dependencies don’t get the new version until a direct dependency does a release pointing at the vulnerable version, passing it on.

      Not sure if that would be a better result in the end. It seems like it depends on who has direct dependencies and how much testing they do. Do they pass it on or not?