Comment by JohnMakin
2 hours ago
I don't know if Ed realizes that criticizing Anthropic's capacity issues directly contradicts the fierce take he's held since early on (and is slowly retreating from): that they wouldn't be having capacity issues unless people found the tool useful enough to cause those capacity issues in the first place. Up until a few months ago he claimed these tools were not useful, would never be useful, and that people weren't actually using them to do anything useful, just as fun toys. Clearly, in Anthropic's case, this isn't true, since the enterprise side is growing so rapidly. The only way you could call that BS is if you think the entire tech sector is deluded about the tools' usefulness, which he used to say, but doesn't anymore.
I think he's just locked into this performance. He raises a lot of interesting points in his articles, but gets a little over his skis when he conflates the absurdity of particular cases with the long-term death of the entire industry (people thought the same thing about the dot-com bubble, and look where we ended up). The applications of this technology are more immediately applicable and relatable than those of some recent bubbles.
Textbook audience capture: once you've built your entire brand on "AI is a scam", you can't back down even if the facts change, because your paying customers are there for that take (the demand for "AI is a scam" takes is almost as frenzied as the demand for AI compute right now), and going back on it literally threatens your ability to pay the mortgage.
This is a big reason why I'm none too happy about the rise of individual-authors-as-brands on Substack replacing traditional journalism: once people follow and pay you specifically for a particular opinion, you're locked in. A very small number of individual bloggers have a brand of "I'll say the truth no matter what" and actually mean it, but the overwhelming majority are like Ed.
"A lot of people use this tool" and "this tool is productive and useful" don't necessarily go together. People do and use all kinds of things that aren't actually useful, they just feel useful, and I think that's the argument most AI critics would make.
Companies don't generally torch money at this scale on things that don't improve their bottom line. Yes, you can point to SaaS bloat and crappy tools like Jira, but this isn't some trend thing anymore. I know because I've been personally watching it play out.
There was an argument to be made that bigger companies with skin in the game were, and are, making a bigger deal than they should about how useful it is, but enterprise adoption is growing way, way beyond those companies, and that can't easily be explained away.
Like this:
> And really, why do these capacity constraints not seem to have any effect on its revenue growth?
is either deliberately obtuse or completely dishonest. Obviously the answer is that people find it worth the spend even with the capacity issues. What other answer could there be? Ed would say they're simply lying about their revenue and using accounting tricks, which is a crazy claim to make with no evidence.
when will this show up in the revenue and profit numbers of these enterprises?
otherwise it's just "fun toys", right?
I'm not sure what you mean. Anthropic's revenue in particular is exploding, driven by enterprise demand. There's no real evidence to claim otherwise.
Is Anthropic profitable and/or profitability on the horizon?
> the enterprise side is growing so rapidly - the only way you could say this is bs is if you think the entire tech sector is experiencing delusion about their usefulness,
i am talking about revenue/profit growth of AI consumers (not producers). if they are making anything useful or improving productivity, surely it would show up in the numbers, right? otherwise what's the point.
usefulness isnt a feeling.
btw, the top two links on HN right now are
> Appearing Productive in The Workplace
> The bottleneck was never the code