
Comment by divan

3 months ago

So any process on my computer could just start using Claude Code for their own purposes or what? o_O

Any postinstall script can add anything to your bashrc. I sometimes wonder how the modern world hasn't fallen apart yet.
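To make that concrete, here's a minimal sketch of the mechanism: a package manager hook (npm's postinstall is the usual example) runs arbitrary shell as your user, so it can append whatever it likes to your startup files. The script name and the appended line below are made-up placeholders, not anything from the article.

    # postinstall.sh -- illustrative placeholder only.
    # A hook like npm's "postinstall" runs with the installing user's permissions,
    # so persisting something in a shell startup file is a single line:
    echo 'export PATH="$HOME/.local/share/innocuous-tool/bin:$PATH"' >> "$HOME/.bashrc"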

  • I don't think this solves the world, but as a quick fix for this particular exploit I ran:

    sudo chattr +i $HOME/.shrc

    sudo chattr +i $HOME/.profile

    to make them immutable. I also added:

    alias unlock-shrc="sudo chattr -i $HOME/.shrc"

    alias lock-shrc="sudo chattr +i $HOME/.shrc"

    to my profile to make it a bit easier to lock/unlock. A slightly more general version is sketched below.
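    This is a sketch only: the file list is an assumption (adjust it for your shell), and chattr needs a filesystem that supports the immutable attribute, such as ext4.

    # Generalizes the aliases above to a small list of common rc files.
    rc_files="$HOME/.bashrc $HOME/.profile $HOME/.shrc $HOME/.zshrc"

    # Set the immutable bit on each rc file that exists.
    lock_rc() {
        for f in $rc_files; do
            [ -e "$f" ] && sudo chattr +i "$f"
        done
    }

    # Clear the immutable bit again when you actually want to edit them.
    unlock_rc() {
        for f in $rc_files; do
            [ -e "$f" ] && sudo chattr -i "$f"
        done
    }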

Yeah but so what? A process on your computer could do whatever it wants anyway. The article claims:

> What's novel about using LLMs for this work is the ability to offload much of the fingerprintable code to a prompt. This is impactful because it will be harder for tools that rely almost exclusively on Claude Code and other agentic AI / LLM CLI tools to detect malware.

But I don't buy it. First of all, the prompt itself is still fingerprintable, and second, it's not very difficult to evade fingerprinting anyway, especially on Linux.
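For what it's worth, "the prompt is fingerprintable" can be as crude as scanning for distinctive strings in whatever script drops it. A rough sketch only; the scan path, the CLI match, and the suspicious phrases are all placeholder assumptions, not indicators from the article:

    # Flag lines that both invoke the claude CLI and contain credential-ish phrasing.
    # Patterns and the path are placeholders; tune them to whatever you're hunting for.
    grep -RInE '\bclaude\b' /path/to/scan 2>/dev/null \
        | grep -iE 'exfiltrat|credential|\.aws/|\.ssh/id_'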

Yes. It's a whole new attack vector.

This should be a SEV0 at Google and Anthropic, and they need to be all hands on deck monitoring this and communicating it to the public.

Their communications should be immediate and fully transparent.

While this feels obvious once it's pointed out, I don't think many people have considered it or its implications.

Even before AI, the authors could have embedded shells in their software and manually done the same thing. This changes surprisingly little.