Comment by mike_hearn
21 hours ago
It's because the model companies believe there's no way to survive just selling a model via an API. That is becoming a low-margin, undifferentiated commodity business that can't yield the large revenue streams they need to justify the investments. The differences between models just aren't large enough, and the practice of giving model weights away for free (dumping) is killing that business.
So they all want to be product companies. OpenAI is able to keep raising crazy amounts of capital because they're a product company and the API is a sideshow. Anthropic got squeezed because Altman launched ChatGPT first, for free, and immediately claimed the entire market, meaning Anthropic became an undifferentiated, Bing-like also-ran until the moment they launched Claude Code and had something unique. For consumer use Claude still languishes, but when it comes to coding, and the enormous token consumption programmers rack up, OpenAI is the one cloning Claude Code rather than the other way around.
For Claude Code to be worth anything to Anthropic's investors it must be a product and not just an API pricing tier. If it's a product they have far more options: they can include ads, charge for corporate SSO integrations, charge extra for more features, add social features... I'm sure they have a thousand ideas, all of which require controlling the user interface and product surface.
That's the entire reason they're willing to engage in their own market dumping by underpricing tokens when consumed via their CLI/web tooling: build up product loyalty that can then be leveraged into further revenue streams beyond paying for tokens. That strategy doesn't work if anyone can just emulate the Claude Code CLI at the wire level; it would mean Anthropic is buying market share for its own competitors.
N.B. this kind of situation is super common in the tech industry. If you've ever looked at Google's properties you'll discover they're all locked behind JavaScript challenges that verify you're using a real web browser. The features and pricing of the APIs are usually very different from what consumers can access via their web browser, and technical tricks are used to segment that market. That's why SERP scraping is a SaaS (it's far too hard to do yourself at scale, so it has to be outsourced now), and why Google is suing those scrapers for bypassing "SearchGuard", which appears to be just BotGuard rebranded. I designed the first version of BotGuard, and the reason they use it on every surface now, not just for antispam, is that businesses need the ability to segment API traffic that might be generated by competitors from the end-user/human traffic generated by their own products.
If Anthropic want to continue with this strategy they'll need to do the same thing: build what is effectively an anti-abuse team, similar to the BotGuard team at Google or the VAC team at Valve, made up of people who specialize in client-integrity techniques and have experience detecting emulators over the network.
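To make the shape of that concrete, here is a rough sketch (not Anthropic's or Google's actual mechanism; every name in it is made up) of the server-side half of such a scheme: the official client runs an obfuscated challenge and attaches an HMAC-signed attestation token, and the server only routes attested requests to the subsidized product tier, billing everything else as plain API traffic.

    // Hypothetical sketch only: segmenting "official client" traffic from
    // wire-level emulators. Assumes the real client solved a server-issued
    // challenge and submits a signed attestation token with each request.
    // ATTESTATION_SECRET and the token shape are illustrative, not real APIs.
    import { createHmac, timingSafeEqual } from "node:crypto";

    const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET ?? "dev-only-secret";
    const MAX_TOKEN_AGE_MS = 5 * 60 * 1000; // tokens expire after five minutes

    interface Attestation {
      nonce: string;      // server-issued challenge nonce
      issuedAt: number;   // client timestamp (ms since epoch)
      signature: string;  // HMAC over `${nonce}.${issuedAt}` produced by the challenge code
    }

    function verifyAttestation(att: Attestation): boolean {
      // Reject stale tokens so a captured token can't be replayed forever.
      if (Date.now() - att.issuedAt > MAX_TOKEN_AGE_MS) return false;

      const expected = createHmac("sha256", ATTESTATION_SECRET)
        .update(`${att.nonce}.${att.issuedAt}`)
        .digest();
      const given = Buffer.from(att.signature, "hex");

      // Constant-time comparison avoids leaking signature bytes via timing.
      return given.length === expected.length && timingSafeEqual(given, expected);
    }

    // Example: route a request to the subsidized "product" tier only if the
    // attestation checks out; otherwise treat it as ordinary API traffic.
    const now = Date.now();
    const sample: Attestation = {
      nonce: "abc123",
      issuedAt: now,
      signature: createHmac("sha256", ATTESTATION_SECRET)
        .update(`abc123.${now}`)
        .digest("hex"),
    };
    console.log(verifyAttestation(sample) ? "product tier" : "raw API tier");

The hard part isn't this check, it's the client side: a static secret like the one above is trivially extracted, which is why real systems like BotGuard lean on obfuscated, constantly rotating challenge code and environment fingerprinting so that an emulator has to keep re-reverse-engineering the client to produce valid tokens.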