Comment by lukev
2 days ago
Claude Code purportedly has over a billion dollars in revenue.
In terms of economic value, coding agents are definitely one of the top-line uses of LLMs.
Sure, I don’t disagree, but the fact remains that $1B is less than 10% of OpenAI’s revenue from ChatGPT and its 700M+ user base.
Coding agents are important and they matter; my point is that this article isn’t about that, it’s about the other side of the market.
And OpenAI will never be worth its current valuation, or be able to keep its spending commitments, based on $20/month subscriptions.
Anyone can sell dollar bills for 90 cents. When they can actually make a profit, then it will be impressive.
But that’s not what they’re doing, sir.
Are they profitable?
Reminder that the entire AI industry is loaning itself money to boost revenue.
I seriously question any revenue figures that tech companies are reporting right now. Nobody should be believing anything they say at this time. Fraud is rampant and regulation is non-existent.
On a purely theoretical-finance level, I don't think the circular funding is actually a problem in itself. It's analogous to fractional reserve banking.
Whether there's also fraud, misreporting of revenue, or other misbehaviour of weird and wonderful classifications that will keep economics history professors in papers for decades is a separate question. I just find that people get fixated on this one structural feature and I think it's a distraction. It might be smoke, but it's not the fire.
Doesn't fractional reserve banking depend upon independence of the various customers? The widely-reported circular financing between AI players does not enjoy that.
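To make that independence point concrete, here is a toy sketch (Python, with purely hypothetical parties and dollar amounts, not actual figures from any company) of how round-tripped financing can book revenue without any outside cash entering the loop:

    # Toy model of round-tripped financing. Parties ("A", "B") and dollar
    # amounts are purely illustrative, not real figures.
    a_cash, b_cash = 100.0, 0.0
    a_revenue = 0.0

    # Step 1: vendor A invests $50 in customer B (equity or a loan).
    investment = 50.0
    a_cash -= investment
    b_cash += investment

    # Step 2: B spends that same $50 buying compute/services from A.
    b_cash -= investment
    a_cash += investment
    a_revenue += investment  # A books $50 of revenue

    print(a_cash, b_cash, a_revenue)  # 100.0 0.0 50.0 -> no net new cash, but revenue is up

This is the independence concern from the comment above: unlike a bank's largely independent depositors and borrowers, the "customer" here is funded by its own vendor.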
Claude has been measurably worse than other models, in my experience. This alone makes me doubt the number. That, and Anthropic has not released official public financial statements, so I'll just assume it's the same kind of hand-waving heavily leveraged companies tend to do.
I actually pay for ChatGPT, and my company pays for Copilot (which is meh).
Edit: Given other community opinions, I don't feel I'm saying anything controversial. I have noticed HN readers tend to be overly bullish on it for some reason.
That doesn’t reflect my (I would say extensive) experience at this point, nor does it reflect the benchmarks. (I realize benchmarks have issues.)
Are you using Claude as an agent in VSCode or via Claude Code, or are you asking questions in the web interface? I find Claude is the best model when it’s working with a strongly typed language with a verbose linter and compiler. It excels with Go and TypeScript in Cursor.
I have used it for GDScript, C++, Java, and other more general questions. Specifically, I'm comparing its responses to other LLMs', ESPECIALLY after incremental narrowing by prompt. Claude seems to randomly change approaches and even ignore context, to the point that you get the same circular issues you see in Copilot (do A because B is bad, then do B because A is bad, or worse, ignore everything before and do C because it's nominal). It seemed more primitive in my sessions from the last time I used it (for a couple of days), ~45 days ago.