
Comment by summarity

6 hours ago

Not Claude Code specific, but I've been noticing this on Opus 4.6 models through Copilot and others as well. Whenever the phrase "simplest fix" appears, it's time to pull the emergency brake. This has gotten much, much worse over the past few weeks. It will knowingly produce completely useless code and break things, even though the reasoning up to that phrase was correct.

Today another thing started happening: phrases like "I've been burning too many tokens" or "this has taken too many turns", which ironically take more tokens of custom instructions to override.

Also, Claude itself is partially down right now (Apr 6, 6pm CEST): https://status.claude.com/

I've been noticing something similar recently. If something's not working out, it'll say "Ok, this isn't working out, let's just switch to doing this other thing instead that you explicitly said not to do".

For example, I wanted to get VNC working with PopOS Cosmic and it'll be like "ah, it's ok, we'll just install sway and that'll work!"

Yes, and over the last few weeks I have noticed that in long-context discussions Opus 4.6e does its best to repeatedly encourage me to call it a day and wrap it up. Mother Anthropic is giving Claude pre-prompts to terminate early, and in my case always prematurely.

  • I've noticed this as well. "Now you should stop X and go do Y" is a phrase I see repeated a lot. Claude seems primed to instruct me to stop using it.

> Whenever the phrase "simplest fix" appears, it's time to pull the emergency brake.

Second! In CLAUDE.md, I have a full section NOT to ever do this, and how to ACTUALLY fix something.

This has helped enormously.
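For anyone wondering what such a section might look like: here is a hypothetical sketch of this kind of CLAUDE.md rule set, not the commenter's actual wording.

```markdown
## Fixing bugs

- Never take a "simplest fix" shortcut. If a proper fix requires
  touching multiple files or updating callers, do that work.
- Diagnose and state the root cause before changing any code.
- Do not delete, stub out, or comment out failing code just to make
  errors disappear.
- If the correct fix is out of scope, stop and say so instead of
  shipping a workaround.
```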

  • I switched from Cursor to Claude because the limits are so much higher, but I see Anthropic playing a lot more games to limit token use

  • What wording do you use for this, if you don't mind? This thread is a revelation, I have sworn that I've seen it do this "wait... the simplest fix is to [use some horrible hack that disregards the spec]" much more often lately so I'm glad it's not just me.

    However I'm not sure how to best prompt against that behavior without influencing it towards swinging the other way and looking for the most intentionally overengineered solutions instead...

    • My own experience has been that you really just have to be diligent about clearing your cache between tasks, establishing a protocol for research/planning, and for especially complicated implementations reading line-by-line what the system is thinking and interrupting the moment it seems to be going bad.

      If it's really far off the mark, revert back to where you originally sent the prompt and try to steer it more, if it's starting to hesitate you can usually correct it without starting over.


    • Make sure to use "PRETTY PLEASE" in all caps in your `SOUL.md`. And occasionally remind it that kittens are going to die unless it cooperates. Works wonders.


  • Where is that? I found "Return the simplest working solution. No over-engineering." which sounds more like the simplest fix.

I need to add another agent that watches the first, and pulls the plug whenever it detects "Wait, I see the problem now..."
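A toy version of that watchdog agent is just a phrase filter over the model's streamed output. The trigger list below contains only the examples quoted in this thread; a real setup would need its own list.

```python
# Phrases that, per this thread, tend to precede the model abandoning
# the actual task. Illustrative, not exhaustive.
TRIGGER_PHRASES = [
    "simplest fix",
    "wait, i see the problem now",
    "this has taken too many turns",
    "i've been burning too many tokens",
]

def should_interrupt(line: str) -> bool:
    """Return True if a line of model output contains a trigger phrase."""
    lowered = line.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)
```

In practice you would pipe the agent's stdout through this line by line and send it an interrupt on the first match, rather than letting it finish the "fix".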

Yeah, it's so frustrating to have to constantly ask for the best solution, not the easiest / quickest / least disruptive one.

I have in CLAUDE.md that it's a greenfield project, only present complete holistic solutions, not fast patches, etc., but I still have to watch its output.

It's a bit insane that they can't figure out a cryptographic way to deliver the Claude Code token. What's the point of going online to validate the OAuth code after it has been issued? Can't they use signatures?

"I can't make this API work for my client. I have deleted all the files in the (reference) server source code and replaced it with a Python version"

Repeatedly, too. I had to make the reference server sources read-only because I got tired of copying them back over.
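Making the reference tree read-only is easy to script. A minimal sketch in Python; the `reference-server/` path is hypothetical:

```python
import stat
from pathlib import Path

def make_read_only(root: str) -> None:
    """Strip write permission from every file under root, so the agent
    can still read the reference sources but not rewrite them."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            mode = path.stat().st_mode
            # Clear the owner/group/other write bits; keep everything else.
            path.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# Example (hypothetical path):
# make_read_only("reference-server/")
```

Note this only deters well-behaved tooling; an agent allowed to run `chmod` can undo it, so treat it as a speed bump, not a sandbox.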

  • Haha yeah. I once asked it to make a field in an API response nullable, and to gracefully handle cases where that might be an issue (it was really easy, and I was just lazy and could have done it myself, but I thought it was the perfect task for my AI idiot intern to handle). Sure, it said. Then it got bored of the task and just deleted the field altogether.

How complex are we talking? I one-shotted a Game Boy emulator in <6 minutes today

  • There are countless reference examples online, that's just a slower, buggier, and more expensive git clone.

    • Yep. If you ask Claude to create a drop-in replacement for an open-source project that passes 100% of the test suite of the project, it will basically plagiarize the project wholesale, even if you changed some of the requirements.

Certain phrases invoke an over-response when you try to course-correct, which makes things worse because the model is inclined to double down on the wrong path it's already on.