Comment by jakobnissen

20 hours ago

Yeah I’ve seen this too. It’s difficult for me to tell if the complaints are due to a legitimate undisclosed nerf of Claude, or whether it’s just the initial awe of Opus 4.6 fading and people increasingly noticing its mistakes.

Just one more anecdote:

I'm on the enterprise team plan, so I have a decent amount of usage.

In March I could use Opus all day and it was getting great results.

Since the last week of March and into April, I've had sessions where I maxed out session usage in under 2 hours and the model got stuck in overthinking loops: multiple turns of realising the same thing, dozens of paragraphs of "But wait, actually I need to do x" with slight variations on the same realisation.

This is not the 'thinking effort' setting in Claude Code; I noticed this happening across multiple sessions with the same thinking-effort settings. There was clearly some underlying, unpublished change that made the model get stuck in thinking loops for longer and more often, with no escape hatch to stop and prompt the user for additional steering when it gets stuck.

  • Whenever I see Opus say “but wait, …” (which is all the time), I get a little closer to throwing my computer out the window. Sometimes I just collapse the thinking section, cross my fingers, and wait for the answer. It’s too frustrating watching the thinking process.

    • I stop the thinking and manually correct with explicit instructions or direction. I treat my agents like well-meaning Ivy League graduate interns. They sometimes lack the experience to know what to do and need a “common sense” nudge every now and then.

    • Have you considered just… writing code? Like we used to in the good old days? If the tool drives you to that point of frustration, maybe it’s time to give the tool a break.

  • I'm also an enterprise user and this has been my experience exactly. Same asks, same code bases, same models, much worse results. Everyone on my team is expressing the same thing.

    Not only that, but the lack of transparency about what's happening, in clear and simple terms, directly from Anthropic is concerning.

    I've already told my org's higher ups that in the current situation we're not close to getting our money's worth with these models.

  • I’ve seen the point raised elsewhere that this could be the double usage promo that ran from the 13th to the 28th of March, i.e. people got used to the promo and then felt the impact when it ended.

    Although it seems that enterprise wasn’t included, so maybe not in your case.

    https://support.claude.com/en/articles/14063676-claude-march...

    • It sounds like (tinfoil hat on) they reduced the quant size of their model and tried to mask the change with the promo. Your theory only addresses the spend, not the reduced reliability.

    • To me, doubling session usage always seemed like a way to gaslight users: once the period ended and the limits felt smaller, people would assume they were just readjusting to the normal limits. Whether it's from a different model being used or an intentional reduction in weekly usage, I've noticed a difference.

  • This timing matches my experience (enterprise plan, but using Opus from VS Code): I finished a heavy refactor of a large C# codebase in mid-March, tried to do basically the same thing in early April, and couldn't.

  • It's probably because you didn't specify "make no mistakes" /s

    In all seriousness though, I've observed the same thing with my own usage.

I think there's a much more nefarious reason that you're missing.

It's pretty clear that OpenAI has consistently used bots on social networks to peddle their products. This could just be the next iteration, mass spreading lies about Anthropic to get people to flock back to their own products.

That would explain why a lot of users in the comments of those posts are claiming that they don't see any changes to limits.

  • The trouble with that argument, though, is that it works the other way as well: how do I, a random internet citizen, know that you're not doing the same thing for Anthropic with this comment?

    (FWIW I have definitely noticed a cognitive decline with Claude / Opus 4.6 over the past month and a half or so, and unless I'm secretly working for them in my sleep, I'm definitely not an Anthropic employee.)

    • Oh it's pretty clear to me that Anthropic employs the same tactics and uses bots on socials to push its products too. On Reddit a couple of months ago it was simply unbearable with all the "Claude Opus is going to take all the jobs".

      You definitely shouldn't trust me, as we're way beyond the point where you can trust ANYTHING on the internet that has a timestamp later than 2021 or so (and even then, of course people were already lying).

      Personally I use Claude models through Bedrock because I work for Amazon, and I haven't noticed any decline. Instead it's always been pretty shit; what people now describe as the model getting lost in infinite loops of talking to itself has happened since the very start for me.

  • Judging from the number of GitHub issues against Anthropic that are shamelessly dismissed as "fixed", I doubt OpenAI needs bots to tarnish that competitor.