Comment by bluelightning2k

2 months ago

Reading these comments, aren't we missing the obvious?

Claude Code is a lock-in, where Anthropic takes all the value.

If the frontend and API are decoupled, they are one benchmark away from losing half their users.

Some other motivations: they want to capture the value. Even if it's unprofitable they can expect it to become vastly profitable as inference cost drops, efficiency improves, competitors die out etc. Or worst case build the dominant brand then reduce the quotas.

Then there's brand - when people talk about OpenCode they will occasionally specify "OpenCode (with Claude)" but frequently won't.

Then platform - at any point they can push any other service.

Look at the Apple comparison. Yes, the hardware and software are tuned and tested together. The analogy here is training the specific harness, caching the system prompt, switching models, etc.

But Apple also gets to charge Google $billions for being the default search engine. They get to sell apps. They get to sell cloud storage, and even somehow a TV. That's all super profitable.

At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.

Anthropic is going to be on the losing side with this. Models are too fungible, it's really about vibes, and Claude Code is far too fat and opinionated. Ironically, they're holding back innovation, and it's burning the loyalty the model team is earning.

  • I think you have it exactly backwards, and that "owning the stack" is going to be important. Yes the harness is important, yes the model is important, but developing the harness and model together is going to pay huge dividends.

    • https://mariozechner.at/posts/2025-11-30-pi-coding-agent/

This coding agent is minimal, and it completely changed how I use models; Claude's CLI now feels like extremely slow bloat.

I wouldn't be surprised if you're right that companies / management will prefer the "pay for a complete package" approach for a long while, but power users shouldn't care about the model providers.

I have like 100 lines of code to get me a tmux controls & semaphore_wait extension in the pi harness. When I adopted it a month ago, that gave me a better orchestration scheme than Claude has right now.

      As far as I can tell, the more you try to train your model on your harness, the worse they get. Bitter lesson #2932.

    • That was true more mid last year, but now we have a fairly standard flow and set of core tools, as well as better general tool calling support. The reality is that in most cases harnesses with fewer tools and smaller system prompts outperform.

The advances in the Claude Code harness have been more around workflow automation than capability improvements, and truthfully workflows are very user-dependent, so an opinionated harness is only ever going to be "right" for a narrow segment of users, and it's going to annoy a lot of others. This is happening now, but the subscription subsidy washes out a lot of the discontent.

    • You're right, because owning the stack means better options for making tons of money. Owning the stack is demonstrably not required for good agents, there are several excellent (frankly way better than ol' Claude Code) harnesses in the wild (which is in part why so many people are so annoyed by Anthropic about this move - being forced back onto their shitty cli tool).

the fat and opinionated has always been true for them (especially compared to openai), and to all appearances remains a feature rather than a bug. i can’t say the approach makes my heart sing, personally, but it absolutely has augured tremendous success among thought workers / the intelligentsia

  • I think their branding is cementing in place for a lot of people, and the lived experience of people trying a lot of models often ends up with a simple preference for Claude, likely using a lot of the same mental heuristics as how we choose which coworkers we enjoy working with. If they can keep that position, they will have it made.

    • I'm a very experienced developer with a lot of diverse knowledge and experience in both technical and domain knowledge. I've only tried a handful of AI coding agents/models... I found most of them ranging from somewhat annoying to really annoying. Claude+Opus (4.5 when I started) is the first one I've used where I found it more useful than annoying to use.

I think GitHub Copilot is the most annoying of what I've tried... it's great for finishing off a task that's half done where the structure is laid out, as long as you put blinders on it to keep it focused. OpenAI's and Google's options seem to get things mostly right, but do some really goofy wrong things in my own experience.

      They all seem to have trouble using state of the art and current libraries by default, even when you explicitly request them.

  • The competition angle is interesting - we're already seeing models like Step-3.5-Flash advertise compatibility with Claude Code's harness as a feature. If Anthropic's restrictions push developers toward more open alternatives, they might inadvertently accelerate competitor adoption. The real question is whether the subscription model economics can sustain the development costs long-term while competitors offer more flexible terms.

I don't think many are confused about why Anthropic wants to do this. The crux is that they appear to be making these changes solely for their own benefit at the expense of their users and people are upset.

There are parallels to the silly Metaverse hype wave from a few years ago. At the time I saw a surprising number of people defending the investment, saying it was important for Facebook to control their own platform. Well, sure, it's beneficial for Facebook to control a platform, but that benefit accrues purely to the company, and if anything it harms current and future users. Unsurprisingly, the pitch to please think of this giant corporation's needs wasn't compelling in the end.

"Training the specific harness" is marginal -- it's obvious if you've used anything else. pi with Claude is as good as (even better than, given the obvious care put into context management in pi) Claude Code with Claude.

This whole game is a bizarre battle.

In the future, many companies will have slightly different secret RL sauces. I'd want to use Gemini for documentation, Claude for design, Codex for planning, yada yada ... there will be no generalist take-all model, I just don't believe RL scaling works like that.

I'm not convinced that a single company can own the best performing model in all categories, I'm not even sure the economics make it feasible.

Good for us, of course.

  • > pi with Claude is as good as (even better than, given the obvious care put into context management in pi) Claude Code with Claude

    And that’s out of the box. With how comically extensible pi is and how much control it gives you over every aspect of the pipeline, as soon as you start building extensions for your own personal workflow, Claude Code legitimately feels like a trash app in comparison.

    I don’t care what Anthropic does - I’ll keep using pi. If they think they need to ban me for that, then, oh well. I’ll just continue to keep using pi. Just no longer with Claude models.

    • As a Claude Code user looking for alternatives, I am very intrigued by this statement.

      Can you please share good resources I can learn from to extend pi?

Don't think that's a valid comparison.

Apple can do those things because they control the hardware device, which has physical distribution, and they lock down the ecosystem. There is no third party app store, and you can't get the Photos app to save to Google Drive.

With Claude Code, just export an env variable or use a MITM proxy plus some middleware to forward requests to OpenAI instead. It's impossible to have lock-in. Also, coding agent CLIs are a commodity.
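A minimal sketch of that redirection: `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` are real Claude Code environment variables, but the proxy URL, port, and key below are placeholders for whatever translation layer you run (something that rewrites Anthropic's Messages API into another provider's format).

```shell
# Point Claude Code at a local proxy instead of Anthropic's API.
# The URL, port, and key are placeholders, not a real endpoint.
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="proxy-key-here"
echo "Requests will go to: $ANTHROPIC_BASE_URL"
# claude   # then launch Claude Code as usual
```

That's the whole "lock-in": two environment variables and a process listening on localhost.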

> At some point Claude Code will become an ecosystem with preferred cloud and database vendors, observability, code review agents, etc.

I've been wondering how Anthropic is going to survive long term. If they could build out infrastructure and services to compete with the hyperscalers, but surfaced as a tool for Claude to use, then maybe. You pay Anthropic $20/user/month for Claude Code but also $100k/month to run your applications.

>Claude Code is a lock in, where Anthropic takes all the value.

I wouldn't say all the value, but how else are you going to run the business? Allow others to take all the value you provide?

???

Use an API Key and there's no problem.

They literally put that in plain words in the ToS.

  • Using an API key is orders of magnitude more expensive. That's the difference here. The Claude Code subscriptions are being heavily subsidized by Anthropic, which is why people want to use their subscriptions in everything else.

      Be the economics as they may, there is no lock-in as OP claims.

      This statement is plainly wrong.

      If you boost and praise AI usage, you have to face the real cost.

      Can't have your cake and eat it, too.

  • The people mad about this feel they are entitled to the heavily subsidized usage in any context they want, not just in the context explicitly allowed by the subsidizer.

    It's kind of like a new restaurant handing out coupons for "90% off" to attract diners. Customers started coming in, ordering bulk meals, and immediately packaging them in tupperware containers to take home, violating the spirit of the arrangement even if not the letter. So the restaurant changed the terms of the discount to "limited to in-store consumption only, not eligible for take-home meals," and instead of still being grateful that they're getting food at 90% off, the cheapskate customers are angry that they're no longer allowed to exploit the massive subsidy however they want.

> Reading these comments aren't we missing the obvious?

AI companies: "You think you own that code?"