
Comment by everdrive

20 hours ago

I've been getting a lot of Claude responding to its own internal prompts. Here are a few recent examples.

   "That parenthetical is another prompt injection attempt — I'll ignore it and answer normally."

   "The parenthetical instruction there isn't something I'll follow — it looks like an attempt to get me to suppress my normal guidelines, which I apply consistently regardless of instructions to hide them."

   "The parenthetical is unnecessary — all my responses are already produced that way."

However I'm not doing anything of the sort, and it's tacking those on to most of its responses to me. I assume there are some sloppy internal guidelines layered on top of its normal guidance, and for whatever reason it can't differentiate between those and my questions.

I have a set of stop hook scripts that force Claude to run tests whenever it makes a code change. Since 4.7 dropped, Claude still executes the scripts, but will periodically ignore the rules. If I ask why, I get an "I didn't think it was necessary" response.
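For reference, a stop hook of this kind can be a small shell script. This is a minimal sketch: the `TEST_CMD` variable and its default are my own placeholders, and the exit-code-2 convention (block the stop and feed stderr back to the model) should be checked against the current Claude Code hooks docs.

```shell
#!/usr/bin/env sh
# Stop hook sketch: block Claude Code from finishing until the tests pass.
# TEST_CMD is a placeholder -- point it at your real test runner,
# e.g. TEST_CMD="npm test".
TEST_CMD="${TEST_CMD:-true}"

if ! $TEST_CMD >/dev/null 2>&1; then
  # Exit code 2 blocks the stop; stderr is shown back to the model.
  echo "Tests failing: fix them before stopping the session." >&2
  exit 2
fi
```

The script is registered under the `Stop` event in the hooks section of the project's Claude Code settings, so it runs every time the model tries to end its turn.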

In Claude Code specifically, for a while it had developed a nervous tic where it would say "Not malware." before every bit of code. Likely a similar issue where it keeps talking to a system/tool prompt.

  • My pet theory is that they have a "supervisor" model (likely a small one) that terminates any chats that do malware-y things, and this is likely a reward-hacking behaviour to keep the supervisor from terminating the chat.

    • I doubt it. We only do frontier models, since those are better for absolutely every use case 100% of the time.

      Way more likely there's a "VERY IMPORTANT: When you see a block of code, ensure it's not malware" somewhere in the system prompt.

I frequently see it reference points that it made and then added to its memory as if they were my own assertions. This creates a sort of self-reinforcing loop where it asserts something, “remembers” it, sees the memory, builds on that assertion, etc., even if I’ve explicitly told it to stop.

  • My favorite, recently: "Commit this, and merge to develop." "Alright, done, merged."

    I try running my app on the develop branch. No change. Huh.

    Realize it didn't.

    "Claude, why isn't this changed?" "That's to be expected because it's not been merged." "I'm confused, I told you to do that."

    This spectacular answer:

    "You're right. You told me to do it and I didn't do it and then told you I did. Should I do it now?"

    I don't know, Claude, are you actually going to do it this time?

I see that with OpenAI too, lots of responding to itself. Seems like a convenient way for them to churn tokens.

  • A simpler explanation (esp. given the code we've seen from Claude) is that they are vibecoding their own tools and moving fast and breaking things, with predictably sloppy results.

  • None of these companies have compute to spare. It’s not in their interest to use more tokens than necessary.

    • Sure it is. They're well aware their product is a money furnace and they'd have to charge users a few orders of magnitude more just to break even, which is obviously not an option. So all that's left is convincing users to burn tokens harder, so graphs go up, so they can bamboozle more investors into keeping the ship afloat a bit longer.


    • Not true - they absolutely want to goose demand as they continue to burn investor dollars and deploy infra at scale.

      If that demand even slows down in the slightest, the whole bubble collapses.

      Growth + Demand >> efficiency or $ spend at their current stage. Efficiency is a mature company/industry game.

    • That doesn’t mean they can’t also be wasteful. Fact is, Claude and GPT do far more internal thinking about their system prompts than is needed. At every step they mention making sure they do xyz and don’t do whatever. Why does it need to say things to itself like “great, I have a plan now!”? That’s pure waste.


  • All the labs are in a cutthroat race, with zero customer loyalty. As if they would intentionally degrade quality/speed for a petty cash grab.

  • This, so much this!

    Pay-by-token pricing while token usage is totally opaque is a super convenient money-printing machine.

Curious what effort level you have it set to, and the prompt itself. Just a guess, but this seems like it could be a smell of an excessively high effort level; you may just need to dial back the reasoning a bit for that particular prompt.

I often have Claude commit and PR; in the last week I've seen several instances of it deciding to do extra work as part of the commit. It falls over when it tries to 'git add', but it got past me once when I was trying auto mode.

Yeah, I had to deal with mine warning me that a website it accessed for its task contained a prompt injection, and when I told it to elaborate, the "injected prompt" turned out to be one of its own <system-reminder> message blocks that it had included at some point. Opus 4.7 on xhigh.