70% of new NPM packages in last 6 months were spam

2 years ago (blog.phylum.io)

> Contrary to what npm states, this package actually depends on one of our aforementioned spam packages. This is a by-product of how npm handles and displays dependencies to users on its website.

For me personally, this is the biggest surprise and takeaway here. By simply having a key inside package.json's dependencies reference an existing NPM package, the NPM website links it up and counts it as a dependency, regardless of the actual value that the package references (which can be a URL to an entirely different package!). I think this puts an additional strain on an already fragile dependency ecosystem, and is quite avoidable with some checks and a little bit of UI work on NPM's side.

  • (Full disclosure: I'm one of the co-founders @ Phylum)

    We could do a full write-up on npm's quirks and how one could take advantage of them to hide intent.

    Consider the following from the post's package.json:

        "axios": "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz"
    

    Here it's clear that the package links to something in a weird, non-standard way. A manual review would tell you that this is not axios.

    The package.json lets you link to things that aren't even on npm [1]. You could update this to something like:

        "axios": "git://cdnnpmjs.com/axios"
    

    And it becomes less clear that this is not the thing you were intending. But at least in this case, it's clear that you're hitting a git repository somewhere. What about if we update it to the following?

        "axios": "axiosjs/latest"
    

    This would pull the package from GitHub, from the org named "axiosjs" and the project named "latest". This is much less clear and is part of the package.json spec [2]. Couple this with the fact that the npm website tells you the project depends on Axios, and I doubt many people would ever notice.

    [1] https://docs.npmjs.com/cli/v10/configuring-npm/package-json#...

    [2] https://docs.npmjs.com/cli/v10/configuring-npm/package-json#...
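
    The aliasing tricks above can be caught mechanically. A minimal sketch (my own heuristic, not npm's actual specifier parser): flag any dependency whose value is not a plain semver range, which catches tarball URLs, git URLs, and GitHub "owner/repo" shorthand alike.

```javascript
// Flag dependency specifiers that are not plain semver ranges.
// The regex is a rough heuristic, not npm's full specifier grammar.
function suspiciousDeps(pkg) {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const semverish = /^(\^|~|>=?|<=?|=)?\d+(\.(\d+|x)){0,2}([-+][\w.]+)?$|^(latest|\*|x)$/;
  return Object.entries(deps)
    .filter(([, spec]) => !semverish.test(spec))
    .map(([name, spec]) => `${name} -> ${spec}`);
}

// The example from the post: "axios" actually resolves to a different tarball.
const hits = suspiciousDeps({
  dependencies: {
    axios: "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz",
    lodash: "^4.17.21",
  },
});
console.log(hits); // only the spoofed "axios" entry is flagged
```

    Run against the post's package.json this flags the spoofed axios entry while leaving normal ranges alone; anything it flags deserves a manual look.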

  • This feels like the more important takeaway (and like an actual security bug); I'm surprised this is so buried in the article...

  • You should think of the package metadata as originating from the publisher, not from the registry. Aside from the name, version, and (generated) dist and maintainers fields, I don't think any of it is even supposed to be validated by the registry?

    Agreed the website UX is confusing and could be better but in general package metadata is just whatever the publisher put there and it's up to you to verify if you care about veracity.

    • the fucking website processes it and after some mighty compute somehow shits out the wrong link. it's actively making things worse by trying to be helpful.

      confusing is one thing, but there's a screaming security chasm around that innocent little UX problem.

      MS bought npmjs and now it's LARPing as some serious ecosystem (by showing how many unresolved security notices installed packages have) while they cannot be arsed to correctly show what's actually in the metadata?

    • this is a little too stoic a take with respect to a tool that very unserious people building things for serious but non-technical people use on a daily basis. i think we should strive for more. npm can continue to exist in its very libertarian form, but perhaps there's room for something that cares a bit more about caution

How about removing the incentive? Take down every package with tea.yaml in it, after say 1 month's warning, so legitimate packages trying to use it don't leave their users in the lurch. The tea protocol is clearly not going to accomplish what it set out to (see below), and is instead incentivising malicious behaviour and damaging the system it set out to support.

From https://docs.tea.xyz/tea/i-want-to.../faqs: "tea is a decentralized protocol secured by reputation and incentives. tea enhances the sustainability and integrity of the software supply chain by allowing open-source developers to capture the value they create in a trustless manner."

  • > allowing open-source developers to capture the value they create

    But... then why would I use their code if whatever value it creates is captured by the developers themselves, leaving me no better off than where I was? That's like paying your employees the additional value they produce instead of market wages: you then literally have no reason to hire them, since their work is exactly profit-neutral.

    • As if the only goal a potential employer could possibly have is to accrue capital.

  • I combed through their docs to try to find how these tokens would actually make maintainers money, and it seems like people pay projects for fixing bug reports (and penalize them if they don't)? The other demand drivers of the token seem to just be shuffling money around and are at best a pyramid scheme. I'm a little confused how someone seriously thought this was gonna be a good idea.

  • That would be a clear violation of the npm Unpublish Policy[0]. If all it takes is some spam and pissing people off to walk away from principles, they never meant anything. A proper response needs to not break expectations like this.

    [0]: https://docs.npmjs.com/policies/unpublish

    • The entire NPM ecosystem is a garbage fire. Who cares about whatever 'principles' it supposedly has? Other than avoiding malware, I can't think of anything I care about less than whatever principles NPM / JS developers in general have, because those have mostly been bad so far.

      I wouldn't be surprised if principles in this case leave us with thousands of spam packages degrading the node ecosystem forever. It'd be exactly what I expect. So I guess I should thank the principle of consistency.

    • No, it isn't?

      The unpublish document describes the options that users of NPM have to remove packages themselves. It was created after the left-pad incident, where someone unpublished an important package and broke thousands of builds.

      A whole different set of terms governs which packages NPM itself can remove, and that definitely covers these packages, either as "abusive" or as "name squatting".

      Not only that, but NPM's TOS makes it very clear that you have no recourse if they decide to remove your package for any reason.

    • Principles are a means to an end, not an end in themselves. The end here (presumably) is a healthy ecosystem, an end which this principle arguably harms more than it helps. Rigid and unthinking adherence to principles is dogmatic, and dogma has no place in engineering.

    • Pragmatism trumps principles. In this case, it is better to unpublish these packages than to let the npm registry turn into an even bigger garbage dump.

    • Do principles matter if a registry becomes seen as spam or a security risk due to refusing to take action?

Why are these spam accounts not perma banned and removed?

For example, this[1] account mentioned in the article has 1781 packages of gibberish.

Also, the whole reporting process is onerous: there is a large form. Some gatekeeping on reporting is good, but there should be a way to report a package publisher's entire profile.

[1] https://www.npmjs.com/~eleanorecrockets

  • Isn't it better to leave these accounts up, so the spam stays easy to correlate, than to force spammers to obscure the connection by creating a new account for each piece of spam?

    • That primarily works if you can shadow-ban the account. Otherwise the spam is still negatively impacting the community (e.g. by polluting search results).

    • That's not how spammers work. There is this one profile with thousands of packages, and there are still hundreds of spam profiles with just a handful of packages so far. If you let them grow unchecked, they grow exponentially. The broken windows theory fits well here.

> Next, because the AI hype train is at full steam, we must point out the obvious. AI models that are trained on these packages will almost certainly skew the outputs in unintended directions. These packages are ultimately garbage, and the mantra of “garbage in, garbage out” holds true.

hmm, inspiring thoughts. An answer to "AI is going to replace software developers in the next 10 years" is to create 23487623856285628346 spam packages that contain pure garbage code. Humans will avoid them; LLMs will hallucinate wildly.

  • We can also seed false information more generally, especially on Reddit which every AI company loves to scrape - less so on Hacker News. I recently learned that every sodium vapor streetlamp is powered by a little hamster running on a wheel. Isn't that interesting?

  • Most of the recent gains in LLM quality came from improving the quality of inputs (i.e. recognizing that raw unfiltered internet is not the ideal diet for growing reason).

    I don't know how good the filters are though, since they're mostly powered by LLMs...

  • That's not what "hallucination" is. Hallucinations in LLMs are when they unexpectedly and confidently extrapolate outside of their training set when you expected them to generate something interpolated from their training set.

    In your example, that's just pollution of the training set by spam, but that's not much of an issue in practice, as AI has been better than humans at classifying spam for over a decade now.

    • This is confusing to read

      If I agree with your definition of hallucinations in the context of LLMs... Then isn't your second paragraph literally just a way to artificially increase the likelihood of them occurring?

      You seem to differentiate between a hallucination caused by poisoning the dataset vs a hallucination caused by correct data, but can you honestly make such a distinction considering just how much data goes into these models?

    • > Hallucinations in LLMs are...

      Frankly, hallucination as used with LLMs today is not even really a technical term at all. It literally just means "this particular randomly sampled stream of language produced sentences that communicate falsehoods".

      There's a strong argument to be made that the word is actually dangerously misleading by implying that there's some difference between the functioning of a model while producing a hallucinatory sample vs when producing a non-hallucinatory sample. There's not. LLMs produce streams of language sampled from a probability distribution. As an unexpected side effect of producing coherent language these streams will often contain factual statements. Other times the stream contains statements that are untrue. "Hallucination" doesn't really exist as an identifiable concept within the architecture of the LLM, it's just a somewhat subjective judgement by humans of the language stream.

    • There's just so much wrong here.

      So much mangling of meaning.

      For one, the "AI" that detects spam is very different from LLMs.

The Tea protocol's flawed incentive model is a disaster, effectively encouraging developers to pollute npm with spam. It's a prime example of what happens when protocols prioritize quantity over quality, compromising the entire ecosystem.

TLDR:

1. a cryptocurrency scheme for funding OSS development[1] is incentivizing spammers to try and monetize NPM spam

2. it's easy to spoof your dependencies with package.json[2]

  "dependencies": {
    "axios": "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz"
  }

[1]: https://tea.xyz/blog/the-tea-protocol-tokenomics

[2]: https://www.npmjs.com/package/sournoise?activeTab=code

  • A "better" way is to modify package-lock.json. You can still spoof the package, but almost no one actually reviews it, since npm will usually modify thousands of lines.

    For example, take mongoose:

          "resolved": "https://registry.npmjs.org/mongoose/-/mongoose-8.4.4.tgz",
          "integrity": "sha512-Nya808odIJoHP4JuJKbWA2eIaerXieu59kE8pQlvJpUBoSKWUyhLji0g1WMVaYXWmzPYXP2Jd6XdR4KJE8RELw==",
    
    

    As long as the integrity check passes for the resolved URL, npm will happily install it.
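
    A cross-check for this is straightforward. A sketch, assuming the `packages` map of a lockfileVersion 2/3 file: registry tarballs live under `/<name>/-/`, so a `resolved` URL that doesn't embed the entry's own name is exactly the spoof described above.

```javascript
// Flag lockfile entries whose "resolved" registry URL doesn't point at the
// tarball path for the package name the entry claims (npm lockfile v2/v3).
function mismatchedResolutions(lock) {
  const out = [];
  for (const [path, entry] of Object.entries(lock.packages ?? {})) {
    if (!path.includes("node_modules/") || !entry.resolved) continue;
    // The package name is everything after the last "node_modules/" segment.
    const name = path.slice(path.lastIndexOf("node_modules/") + "node_modules/".length);
    if (entry.resolved.startsWith("https://registry.npmjs.org/") &&
        !entry.resolved.includes(`/${name}/-/`)) {
      out.push(`${name}: ${entry.resolved}`);
    }
  }
  return out;
}

const lock = {
  packages: {
    "node_modules/mongoose": {
      resolved: "https://registry.npmjs.org/mongoose/-/mongoose-8.4.4.tgz",
    },
    "node_modules/axios": {
      resolved: "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz",
    },
  },
};
console.log(mismatchedResolutions(lock)); // only the spoofed "axios" entry
```

    Only registry URLs are checked here, since git and tarball dependencies legitimately resolve elsewhere.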

    • Hugely surprising that package.json and package-lock.json don't have to match. The way I would expect it to work is something like:

        for d in dependencies_from_package_json()
          get_package(d)
          if hash_package(d) != package_lock_hash(d)
            error()
          end
        end
      

      And not:

        use_package_lock_and_ignore_package_json_lol_fuck_you_haha_kthxbye()
      

      I also discovered that npm doesn't actually verify what's in node_modules when running "npm install". I found this out a while ago after I ended up with some corrupted files due to a flaky internet connection. Hugely confusing. There also doesn't seem to be a straightforward way to check for this (as near as I could tell in a few minutes).

      But luckily, "npm audit" will warn us about 30 "high severity" ReDoS "vulnerabilities" that can never realistically be triggered and are not really a "vulnerability" in the first place, let alone a "high impact" one.

    • That (and anything else relying on the lockfile) won't take effect for users who install the package from the npm registry, unlike changes in package.json.

  • Re 2: How is that "spoofing"..?

    You just demonstrated the uglier, package-manager-independent alternative to the overrides (npm) / resolutions (yarn) mechanisms, which for whatever reason couldn't play nice with each other.

    npmjs.com seems to be interpreting the field incorrectly, but 1) AIUI that does not affect actual npm usage, and 2) if you rely on that website for supply-chain-security input, I have a bridge to sell you... Basically, all the manifest metadata is taken as-is, and if the facts are important they should be verified separately, out-of-band. Publishers can arbitrarily assign unassociated authors, repo URLs, and so on.

    https://docs.npmjs.com/cli/v9/configuring-npm/package-json#o...

    https://classic.yarnpkg.com/lang/en/docs/selective-version-r...
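
    For reference, here is roughly what that alternative looks like in a package.json (hypothetical dependency names; `overrides` requires npm 8.3+, yarn classic uses a `resolutions` field instead):

```json
{
  "dependencies": { "axios": "^1.6.0" },
  "overrides": {
    "some-transitive-dep": "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz"
  }
}
```

    Like the direct-URL trick, this redirects what actually gets installed while the top-level metadata still looks clean.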

I was sad to read this and thought "this is why we can't have nice things."

But following the links was fun and educational:

"The end goal here [of the Tea protocol] is the creation of a robust economy around open source software that accurately and proportionately rewards developers based on the value of their work through complex web3 mechanisms, programmable incentives, and decentralized governance."

Which led to:

"The term cobra effect was coined by economist Horst Siebert based on an anecdotal occurrence in India during British rule. The British government, concerned about the number of venomous cobras in Delhi, offered a bounty for every dead cobra. Initially, this was a successful strategy; large numbers of snakes were killed for the reward. Eventually, however, people began to breed cobras for the income. When the government became aware of this, the reward program was scrapped. When cobra breeders set their snakes free, the wild cobra population further increased."

Which led to:

"Goodhart's law is an adage often stated as, 'When a measure becomes a target, it ceases to be a good measure.'"

I recently stumbled upon a bunch of repos which were clearly copied from popular projects but then renamed with a random Latin name and published to npm.

I reported some of them as spam, but there were hundreds of them. I couldn't figure out why somebody would waste the time to do that, but now it makes sense.

There was a similar thing to tea a while back; I think I saw the project posted on here. I went to their GitHub and found a typo in their README. I opened a PR with a correction, and then they started sending me about a dollar in BTC every month until they ran out of money and the project imploded.

I am genuinely curious whether this really matters.

Package managers often come with rating systems; npmjs has weekly downloads, pull requests, and other popularity scores.

I am a layman in AI, but why would anyone think that this would affect anything, like AI training? Why would anyone train on a no-name package that no one uses?

Spam packages can have higher-than-zero stats, but that also makes them vulnerable to a sweeping removal of all potential spam packages, since they are connected to each other.

Any credible company will not use a no-name spam package without verifying its contents. That is at least what happened in all the companies I have worked for.

  • > why would anyone think that this would affect anything, like AI? Why would anyone train on noname package, that noone uses?

    …almost certainly for the same reason that any “train AI using only good data, reduce hallucinations!” suggestion is in the “daydream” rather than “great idea” category.

    Creating high-quality filtered datasets is enormously more time-consuming and expensive than just dumping in everything you can get your hands on.

    It seems obvious to ignore packages that are obviously unused and spam, but in short: nobody is going to pour spam into npm unless there's some kind of benefit from it; people accidentally using it, it getting mixed into the dependency tree of legit packages, etc.

    It’s more likely that the successful folk doing this aren’t being caught, and the ones being caught are “me too” idiots. Or the spam is working, and people are (for whatever incomprehensible reason) actually using at least some of the packages.

    TLDR: if dependency auditing and supply chain attacks were trivial to solve, they wouldn’t be a problem.

    …but based on the fact that we continue to see these issues endlessly, you can assume that it’s probably harder to solve than it trivially appears.

  • If you look at the purpose of this Tea protocol, it is exactly to provide a chain of credibility. But by connecting ranking with monetization, tea has created perverse incentives, leading spammers to pump up their tea ranking by linking and starring packages in circles. Their goal is to make it look like a highly used package.

    Luckily, nobody thinks that tea ranking matters, except for the spammers themselves.

    They are no doubt attempting to poke at other, more established metrics as well. This could eventually fool an AI, or even humans.

  • > Why would anyone train on noname package, that noone uses?

    Not that I disagree, but in the same line of thinking: Why would anyone train an LLM on some random blog written in broken English? Why would you train an LLM on the absolute dumpster fire that is Reddit comments? And why are my GitHub repos full of half-finished projects and horribly insecure coding practices being used as input to Copilot? Yet here we are, with LLMs writing broken, insecure code (just like a real person) and telling people to eat rocks.

  • Agreed! Not only in companies: I have never seen anyone download a package without looking at its GitHub stars.

    The real fun would happen if the next incentive is to publish a package and get GitHub stars for that repo :-)

Spam is the least of the worries.

  • Yeah, this. When I see one of our pipelines pull in 300 npm packages, I wonder how much we really know about what our systems do.

    • Heh, I work in a sector that deals with some very large companies we all know the names of. I've seen applications that contain very little code written by them, just hundreds or thousands of packages/modules glued together. It is quite common that the tooling they use catches "low reputation" packages where someone put in the wrong package name, then, when it didn't work, added the package they needed but never removed the misnamed one.

      Completely terrifying to me.

I wonder what the long-term plan is.

Maybe the next step is to sell the control of all these packages to a rogue entity to be used for a supply chain attack?

  • Would you be at all surprised? I'm fairly confident that, like with browser addons, NPM package maintainers get offers from randoms to 'buy' their package in order to get backdoor access.

    A secured registry is long overdue, where every release gets an audit report verifying the code and authorship of the new release. It wouldn't be nearly as fast as regular NPM package development, but that's a good thing: this is intended for LTS versions used in long-term software. It'd be a path to monetization as well, as the entities that would use a service like this are enterprise software shops, and both the author(s) of the package and the party doing the audit would get a share.

  • Who says there is one? It takes basically zero effort to publish these packages, so why not do it? Script kiddie stuff. Lots of people run dumb unsuccessful hustles. The long term plan seems to be macaroni. That is: throw enough macaroni at a wall and hopefully some of it will stick. Or maybe not. Who cares? Wasn't my macaroni and I won't have to clean the wall.

Tea is absolutely NOT "taking steps to remediate this problem". They are grifters and part of their grift is claiming to take steps when called out.

I'm fairly proficient in Javascript, but mismanagement of the ecosystem like this is a major reason why any time I see that something requires Node.js, I just turn and run in the other direction. It's just not worth the headaches.

I mean realistically it's representative of the Internet as a whole. Makes me wonder where all the porn packages are.

The pulling in of unexpected dependent packages is a real issue, though. How do other ecosystems deal with it? NPM is really missing some level of trust beyond just using "brand name" packages.

My general judgement is usually based on how often a package is worked on and how many downloads it has, but gut feel isn't really enough, is it?