Comment by dchftcs

3 days ago

Anthropic and OpenAI are essentially betting that a somewhat small difference in accuracy translates into a huge advantage, and being the one that's slightly but consistently better than the rest is the only way they can justify the investment in them at all. It's natural, then, to expect that an agent trained to use a specific tool will be better at using that tool. If Claude stays slightly better than other models at coding, and Claude Code stays slightly better than OpenCode, the combination can be hard to beat even by cheaper alternatives. Right now, even though Kimi K2 and the like are cheaper with OpenCode and perform decently, I spend more than 10x as much on Claude Code.

In that case, though, why the lock-in? If the combination really does outperform competitors' offerings, Anthropic should encourage an open ecosystem, confident it would win the comparison.

  • I imagine they do not see it as a level playing field. If OpenCode can draw on Claude Code credits but cannot draw on Codex ones (we've just had a tweet promising to fix this, more or less), then it can be construed as an advantage on OpenAI's part. Personally, I think it's idiotic, and companies should stop penny-pinching in situations where people are already paying $200; there can be no more value extraction at that price point.