Comment by tumdum_
20 days ago
> On top of that, Anthropic is losing money on it.
It seems they are *not* losing money on inference: https://bsky.app/profile/steveklabnik.com/post/3mdirf7tj5s2e
No, and that is widely known. The actual problem is that the margins at that scale are not sufficient to make up for the gargantuan cost of training their next SOTA model.
They are large enough to cover their previous training costs but not their next gen training costs.
i.e. they made more money on 3.5 than 3.5 cost to train, but didn't make enough from 3.5 to train 4.0.
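The claim above is just arithmetic on three quantities; a minimal sketch with entirely made-up numbers (none of these figures are real) shows why revenue can cover the previous model's training run while still falling short of the next one's:

```python
# Hypothetical illustration only -- all numbers are invented.
# The premise: each model generation costs far more to train than the last.
train_cost_prev = 100       # cost to train the current model (arbitrary units)
train_cost_next = 300       # next-gen training run, assumed ~3x more expensive
inference_margin = 150      # gross margin earned serving the current model

covers_previous = inference_margin > train_cost_prev
covers_next_gen = inference_margin > train_cost_next
print(covers_previous, covers_next_gen)  # True False
```

So "profitable on inference" and "profitable as a frontier lab" can both be true or false independently, depending on how fast training costs grow relative to margins.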
Source on that?
Because inference revenue is outpacing training cost, based on OpenAI's reports and intuition.
Net inference revenue would need to be outpacing it to go against his point about margins.
That's for the API, right? The subscriptions are still a loss. I don't know which of the two is larger.