Comment by victorbjorklund
12 hours ago
Anthropic is really trying to burn all that goodwill they worked up by raising prices, reducing limits and making it impossible to know what the actual policies are.
Boiling the frog is an art form. You've got to know when to turn up the heat and when to let it simmer.
Don’t know, I feel like I’ve watched every tech company get through every controversy without consequence.
Google when it merged YouTube and Google+, Reddit multiple times, Facebook after countless scandals, Microsoft destroying Windows and pushing ads.
At the end of the day a solid product and company can withstand online controversy.
> a solid product and company can withstand online controversy
A product with a massive moat. Switching from Claude to another competitor is insanely easy and without much loss of quality. Until they’ve built their moat, burning goodwill is foolish.
What’s different is it’s probably required due to the cash that’s being burnt to operate. They can’t afford to keep offering so much for so little revenue.
At the end of the day, unenforced anti-competition regulations can bulldoze controversies.
Hormussy started it.
If you want LLMs to continue to be offered we have to get to a point where the providers are taking in more money than they are spending hosting them. And we still aren't there (or even close).
They are taking in more than they are spending hosting them. However, the cost for training the next generation of models is not covered.
Nope. They're losing money on straight inference (you may be thinking of the interview where Dario described a hypothetical company that was positive margin). The only way they can make it look like they're making money on inference is by calling the ongoing reinforcement training of the currently-served model a capital rather than operational expense, which is both absurd and will absolutely not work for an IPO.
The open models may not be as good, but maybe they're good enough. AI users can switch when prices rise, before things become sustainable for (some of) the large LLM providers.
Currently it costs far more to host an open model yourself than to subscribe to a much better hosted model, which suggests the hosted ones are still being massively subsidised.
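The parent's claim can be sanity-checked with a quick back-of-envelope calculation. All dollar figures below are made-up placeholders for illustration, not real GPU rental rates or plan prices:

```python
# Back-of-envelope: renting a GPU to self-host an open model vs. paying
# for a hosted subscription. Every number here is an assumed placeholder.

GPU_HOURLY_RATE = 2.00      # assumed cloud rental, USD/hour, one large GPU
HOURS_PER_MONTH = 8 * 22    # assumed usage: 8 h/day, 22 working days

self_host_monthly = GPU_HOURLY_RATE * HOURS_PER_MONTH
subscription_monthly = 20.00  # assumed hosted-plan price, USD/month

print(f"self-hosting:  ${self_host_monthly:.2f}/month")
print(f"subscription:  ${subscription_monthly:.2f}/month")
print(f"ratio: {self_host_monthly / subscription_monthly:.1f}x")
```

Under these assumed numbers, self-hosting comes out more than an order of magnitude pricier per month, which is the gap the comment is pointing at; real ratios depend entirely on utilisation and hardware.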
It is nobody's responsibility to ensure billion dollar companies are profitable. Use them until local models are good enough
I'll take local models over these corporate ones any day of the week. Hopefully it's only a matter of time
I think this has to be done with technological advances that makes things cheaper, not charging more.
I understand why they have to charge more, but not many are gonna be able to afford even $100 a month, and that doesn't seem to be sufficient.
It has to come with some combination of better algorithms or better hardware.
Making it more affordable would be very bad news for Amazon, who are now counting on $100B in new spending from OpenAI over the next 10 years.
They probably aren’t planning on making the money on consumer subscriptions. Any price is viable as long as the user can get more value out of it than they spend.
If they started doing caching properly and used proper sunrooms for it, they'd have a better chance.
If my empty plate had a pizza on it it would be a good lunch
I see the current situation as a plus. I get SOTA models for dumping prices. And once the public providers go up with their pricing, I will be able to switch to local AI because open models have improved so much.
Like with all new products, it takes time for the market to do its work. See it from the positive side: demand for more, faster, bigger hardware is finally back after 15 years of dormancy. Finally we might see 128 GB default memory or 64 GB video cards two years from now.
Would you please think of the shareholders
What shareholders? Anthropic is a money-burning pit. Not to the same extent as OpenAI, but both will struggle hard to actually turn a profit some day, let alone make back the massive investments they've received.
Not that they don't bring value; I'm just not convinced they'll be able to sell their products in a sticky enough way to charge the prices they'll need to cover the absurd costs.
>> both will struggle hard to actually turn a profit some day, let alone make back the massive investments they've received.
I'd agree with you, except I've heard this argument before. Amazon, Google, Facebook all burned lots of cash, and folks were convinced they would fail.
On the other hand plenty burned cash and did fail. So could go either way.
I expect, once the market consolidates to 2 big engines, they'll make bonkers money. There will be winners and losers. But I can't tell you which is which yet.
Roughly $20B of ARR reportedly added in Q1 doesn't sound particularly bad; they'll raise effective prices some more as Claude diffuses into the economy, which sounds like a money printer. The issue is they're compute-constrained on the supply side, which limits how fast they can grow…
It's almost like they want me to switch to the Chinese clones - which they consider malicious actors.
Aren't they just doing what Hacker News was trying to tell them to do? That AI is useful but not sure if sustainable. Now they're increasing prices and decreasing tokens and you guys are pissed off.
I feel this has to be said constantly, though I hate doing it.
hn is not a monolith. People here routinely disagree with each other, and that's what makes it great
I'm aware. When I say "Hacker News", I mean a very sizable portion of users who keep repeating the opinion that OpenAI's collapse is imminent.