Malicious skills targeting Claude Code and Moltbot users

4 days ago (opensourcemalware.com)

Submitters: "Please use the original title, unless it is misleading or linkbait." - https://news.ycombinator.com/newsguidelines.html

In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.

I have not been following this whole thing closely, but this is where my mind went as soon as I heard there was some overlap in the popularity of this new un-sandboxed agent and people who are into crypto. It's like if everyone who is into buying physical gold started doing a Tiktok challenge to post pictures of their houses and leave their front doors unlocked.

  • Makes me wonder how much overlap there is with the crowd who disables protections like immutable system images and SIP on macOS as a matter of course…

People say the reason nigerian prince scammers use such ridiculous story, or bank phishing has so many typos, is to pre-filter dumb and gullible people so the scammers don't waste time on targets that won't get scammed in the end.

All these AI "hacks" seem to be based on the same principle.

  • To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."

Watching folks speed-run this whole thing is kind of funny from the outside.

I wonder if anyone with a correct mental model of how LLM agents work (i.e, does not conceptualize them as intelligent entities) has actually granted them any permissions for their own life... personally, I couldn't imagine doing so.

Let alone crypto, the risk of reputational loss for actions performed on my behalf (even just spamming personal or professional contacts) is just too high.

  • I let Gemini add events to my calendar, but that's about it. All the actions in the app require explicit approval.

    [ insert butter bot meme here ]

  • I mean… If you have a mental model of LLM agents as intelligent entities, why are you granting them credentials? How many intelligent entities have you shared your Coinbase login with?

  • i can't imagine running these things outside of a vm and it's bizarre to see how many people yolo it

    • Agreed, but that's trivial to fix.

      The conceptual problem is that there is a huge intersection between the set of "things the agent needs to be able to do in order to be useful" and "things that are potentially dangerous."

    • I installed it on a spare computer, physically separated. My bigger concern is giving it access to accounts online, without those however it is not very cool.

I'm reminded of the quip that "mankind has already created life in their own likeness, and it's the computer virus"

  • Are you thinking of Agent Smith in the Matrix?

    > I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.

    • Great monologue, shaky biology.

      Viruses do not multiply endlessly. Most viruses exist in stable ecological cycles.

      Most viruses are beneficial to life. We complain about the few (and tiny minority of viruses) that infect humans and we do so from a selfish perspective, but forget about all the other that make life and evolution possible.

      As a matter of fact evolution favors reduced lethality in many cases because wiping out hosts is bad for viral survival.

      Agent Smith is way off on this one ...

      1 reply →

    • no, i remembered it being a quote from some famous scientist, and googling a bit now I see it was stephen hawking:

      I think computer viruses should count as life ... I think it says something about human nature that the only form of life we have created so far is purely destructive. We've created life in our own image.

      3 replies →

Anyone dumb enough to run this on their computer deserves it.

  • AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way. At the very least you'd think they'd ask a chatbot if what they're doing is a bad idea!

    • > AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way.

      Isn't that the whole point of AI?

  • I think most people are buying separate computers to run it on. This is a nice example of why you might want to do that.

    (Though they're still hooking it up to their entire digital life, which also doesn't seem very reassuring.)

I'd call it "suspicious" that this latest idiocy came out of nowhere and got pushed so hard to normies, when results like this are 100% predictable... if it wasn't also consistent with how the AI industry itself operates.

  • What is suspicious? What was “pushed”? The demand for a personal assistant AI bot is real. Even if I don’t personally share it.

    • One could reasonably ask: out of the hundreds (thousands?) of similar "personal AI assistant" tools out there, why did this specific one blow up so dramatically and in such a short period of time? https://www.star-history.com/#openclaw/openclaw&type=date&le...

      But to be clear, I'm saying I don't think this is especially suspicious, because actual AI companies are releasing products in exactly the same way, with warning labels that they know users will ignore / aren't capable of assessing in the first place.

      1 reply →

  • It really is a huge bummer that the most important new technologies of this era have such a film of slime on them. Crypto, AI, whatever comes next, it's just no longer an era in which we can expect innovation to make our lives better. It enables grifters and scammers more than anyone else.

    • Like I say, the tech is cool but they are doomed to fail (partially because of grift) [although in context of crypto stablecoins/gold (paxos) is the one thing I liked and it did go great for me in terms of gold]

      I hope it doesn't count as promotion but I had literally written a blog post about it and made an account literally named justforhn on mataroa when someone was discussing crypto with me in here or something

      https://serjaimelannister.github.io/hn-words/

    • Yes, grifters latching onto the newest technology to sell snake oil is a brand new phenomenon and definitely not literally a fundamental part of new technology.

This was inevitable, better now than later when the damage is less widespread. Now clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individually will always naively download whatever is on the internet. Platforms needs to safeguard against that.

Remember the early days of Windows? yea it's gonna happen again with AI.

> I don’t know how many people are involved in managing the ClawHub registry, but there is no evidence that the skills listed there are scanned by any security tooling. Many of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.

I shouldn't still be shocked by the incompetence and/or negligence of these people, and yet I am.

Even outside skills, prompt-injection is still unsolvable and the agents need credentials to do anything useful so these things are basically impossible to secure.

I can understand the thought process, although I do not agree with it, of using Clawdbot/Openclaw. I do not understand the thought process of downloading random human-readable instructions or "skills" (especially those pertaining to the manipulation of crypocurrency) and giving it to something in charge of your system without at least reading them first.

I've heard people granting access to their production servers to this thing. Apparently you can ask it to check logs to find solutions to some errors or whatever. Gotta be a complete moron to do that.

I've only installed it on a fresh VM and the first impression was underwhelming. Maybe there is some magic I can't see.

  • Bad news is there are such morons in your company.

    Good news is this is why we have IAM and why such people in my org don't get any production access.

  • Putting it on a VPS is genius. Putting it on a VPS you rely on... Yeah maybe not ;)

I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?

  • Some ideas:

    * Clear labeling of action types (read/get vs write/post) * A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call) * More occurrences of AI agents hurting more than helping in the current ecosystem

You can tell immediately which commenters here didn't read past the clickbait headline.

  • Agreed. This is a standard supply chain attack that has little to do with AI except that it is written in the 'english-as-a-scripting-language' that LLMs execute.

    Every repository is vulnerable to this kind of attack, and pip/npm have been attacked in many times in similar ways.

Ok I ask chat GPT sometimes for advice in health / Fitness and also finance. Not like where to put my money but for general Information how stuff works what would apply here and there. The issue is already that OpenAI knows a lot of me. And ChatGPT itself when asked what he things I am etc draws a pretty clear picture. But I stay away from oversharing specific things. That is mainly my income and other super detailed data. When I ask I try to formulate it to use simple numbers and examples. Works for me. When working with coding agents I’m very skeptical to whitelist stuff. It takes quite the while before I allow a generic command to be executed outside of a sandbox. But to install a random skill to help with Finance Automation… can’t belief it. Under what stone do you have to live to trust your money be handed by an agent and then also in connection with a random skill?

  • > draws a pretty clear picture

    You have "memory" activated in your settings. It is recording information about you and using it in future conversations. Have a look at settings > personalization

    • What does this matter? Even if I disable it I send enough of data. The point I tried to make was that it baffles me that others just trust theses tools. I’m aware that I send data to OpenAI. I know that chatGPT has a memory feature. But I’m not so naive to think that just because I disabled this magic checkbox the other side might not continue to collect and store data.

>Unless you have been living under a rock, you’ve head of ClawdBot and its incredible rise to fame.

I don't consider myself as living under a rock, and this is the first time I've read anything about ClawdBot.

Seems like essentially the same threat vector as with NPM.

Not quite related: I never heard of clawdbot before, so, I guess TIL that's the bot my website keeps getting requests that are obviously malicious from.

So many years of work in Software and Hardware Engineering to separate instructions from data. NX bit, ASLR, prepared statements etc.

All out the door.

I’m not installing it so someone tell me, how are skills added in ClawdBot/OpenClawd?

This is funny, I was discussing moltbook with Claude and it told me there's already a crypto. I thought that's pretty funny, I might want to get some, but can't be arsed to figure it out.

"Do you think I could just give molt a BTC wallet with a bit of funds and tell it to figure out how to buy some?"

-"Yes, but it wouldn't be long before you get pwned."

... Six hours later, this pops on the front page :)

You do have to hand it to crypto, it does enable "the great sort" quite effectively. Its more or less like an organic bug-bounty system sans morality.

Well, sorry but “play stupid games, earn stupid prices”

Letting a glorified lorem ipsum generator have control over anything personal or sensitive is just … what’s wrong with you? You know not of computers?

  • Well no, that's really not related to the issue at all.

    This is a bog-standard supply chain attack against their skills repository. It's not an LLM-specific attack, and nearly every repository (pip, npm, etc) has been subject to similar malware.

>Unless you have been living under a rock, you’ve head of ClawdBot and its incredible rise to fame.

Nope, never heard of it. Is it a rock worth living under?