
Comment by tropicalfruit

7 months ago

Reading all the shilling of Claude and GPT I see here often, I feel like I'm being gaslit.

I've been using the premium tiers of both for a long time and I really feel like they've been getting worse.

Claude especially I find super frustrating and maddening: misunderstanding basic requests, or taking liberties by making unrequested additions and changes.

I really get this sense of enshittification, almost as if they are no longer trying to serve my requests but to do something else instead, like I'm the victim of some kind of LLM A/B testing to see how much I can tolerate or how much mental load can be transferred back onto me.

While it's possible that the LLMs are intentionally throttled to save costs, I would also keep in mind that LLMs are now being optimized for new kinds of workflows, like long-running agents making tool calls. It's not hard to imagine that improving performance on one of those benchmarks comes at a cost to some existing features.

I suspect it may not be that they're getting objectively _worse_ so much as that they aren't static products. Their prompts and context engines are constantly being tweaked in ways that surely break people's familiar patterns. There really needs to be a way to cheaply and easily anchor behaviors so that people can get more consistency. Either that, or we're just going to have to learn to adapt.

Anthropic have stated on the record several times that they do not update the model weights once they have been deployed without also changing the model ID.

  • No, they do change deployed models.

    How can I be so sure? Evals. There was a point where Sonnet 3.5 v2 would happily output 40k+ tokens in one message if asked. Then one day it started, with 99% consistency, outputting "Would you like me to continue?" after far fewer tokens than that. We'd been running the same set of evals throughout, so we could definitively confirm the change. Googling will also reveal many reports of this.

    Whatever they did, in practice they lied: the API behavior of a deployed model changed.

    Another one: differing performance (not latency, but different output on the same prompt over 100+ runs, statistically significant well beyond random chance) between AWS Bedrock-hosted Sonnet and direct Anthropic API Sonnet, same model version.

    Don't take at face value what model providers claim.
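    The methodology described above can be sketched simply: run the same prompt N times before and after a suspected change, count a discrete behavior (e.g. the early "Would you like me to continue?" truncation), and test whether the difference in rates could be chance. A minimal sketch with hypothetical counts, using a two-proportion z-test rather than the commenter's actual eval harness:

    ```python
    # Sketch: was a behavior change between two eval runs statistically
    # significant? Counts below are hypothetical, not from the comment.
    from math import sqrt, erfc

    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        """Two-sided z-test for a difference in proportions."""
        p_a, p_b = hits_a / n_a, hits_b / n_b
        p_pool = (hits_a + hits_b) / (n_a + n_b)          # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = erfc(abs(z) / sqrt(2))                  # two-sided p-value
        return z, p_value

    # e.g. before: 2/100 runs truncated early; after: 99/100 runs truncated
    z, p = two_proportion_z(2, 100, 99, 100)
    print(f"z={z:.2f}, p={p:.2g}")  # tiny p-value: a real change, not noise
    ```

    With counts that lopsided the p-value is vanishingly small; the point is that even a modest eval suite, rerun unchanged, can distinguish a deployed-model change from ordinary sampling variance.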

  • If they are changing model weights while keeping the date-stamped model ID the same, despite their statements to the contrary, it would be a monumental lie.

      Anthropic make most of their revenue from paid API usage. Their paying customers need to be able to trust them when they make clear statements about their model deprecation policy.

      I'm going to choose to continue to believe them until someone shows me incontrovertible evidence that this isn't true.
