Comment by robeym

12 hours ago

As a long-term 20x user, Claude has recently felt a lot like using AI for coding a year or so ago. It can't reliably handle basic tasks: I ask for something straightforward and get something subtly wrong, incomplete, or just not workable. I always use the best model available with effort levels maxed, but with all their changes I have to relearn how to make the model perform at its best every day, and it seems I can't keep up. It's not that Claude can't do impressive things, it clearly can, but the inconsistency on simple, expected behavior makes it hard to use. The downtime is annoying but hasn't been the deciding factor.

I'm not waiting it out this time. I'm switching over to Codex, and based on my usage today it looks like I'll be fine on the 5x plan, so I can drop down and save about $100 a month, which is nice. I didn't quite have a grasp on how quickly companies can change for better or worse until Anthropic showed me. I'm surprised at how quickly they brought me from a happily paying Max user to not even wanting the lowest paid tier.

The inconsistency has always been there; you're just noticing it more over time, and the models are not really improving at real work in spite of all the new releases and churn.

  • I've used Claude Code from the beginning, and the first waves of changes were genuine improvements, capped by a very steady four months in late 2025. But these past two or three months, the changes have shifted toward significant and frequent degradations in model performance. I had a consistent workflow for over four months in Claude Code, but this year I've had far more surprises. I'm not sure which tools you use, but Claude Code had a great period of consistency, at least compared to other AI tools.