Comment by dvfjsdhgfv
6 months ago
> If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their user base you think they don't have a successful business model?
Actually, I'd be very curious to know this. Because we already have a few relatively capable models that I can run on my MBP with 128 GB of RAM (and a few less capable models I can run much faster on my 5090).
In order to break even they would have to minimize the operating costs (by throttling, maiming models etc.) and/or increase prices. This would be the reality check.
But the cynic in me feels they prefer to avoid this reality check and use the tried and tested Uber model of permanent money influx with the "profitability is just around the corner" justification but at an even bigger scale.
> In order to break even they would have to minimize the operating costs (by throttling, maiming models etc.) and/or increase prices. This would be the reality check.
Is that true? Are they operating inference at a loss, or are they incurring losses entirely on R&D? I guess we'll probably never know, but I wouldn't take it as a given that inference is operating at a loss.
I found this: https://semianalysis.com/2023/02/09/the-inference-cost-of-se...
which estimates that it costs $250M/year to operate ChatGPT. If even remotely accurate, $10B in revenue on $250M of COGS would be a great business.
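As a back-of-the-envelope check, here is the margin that estimate implies. The figures are just the numbers quoted in this thread (SemiAnalysis's ~$250M/yr serving cost, the ~$10B revenue figure), not audited financials:

```python
# Rough gross-margin math using the thread's estimates, not real financials.
revenue = 10e9          # ~$10B annual revenue (commenter's figure)
inference_cogs = 250e6  # ~$250M/yr to serve ChatGPT (SemiAnalysis estimate)

gross_profit = revenue - inference_cogs
gross_margin = gross_profit / revenue

print(f"Gross profit: ${gross_profit / 1e9:.2f}B")   # → Gross profit: $9.75B
print(f"Gross margin: {gross_margin:.1%}")           # → Gross margin: 97.5%
```

A ~97% gross margin on inference alone would indeed be a great business; the disagreement downthread is over whether the $250M serving-cost estimate is anywhere near right.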
As you say, we will never know, but this article[0] claims:
> The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes the rest, and then some. It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue to run the software it sells before any other costs.
[0] https://www.lesswrong.com/posts/CCQsQnCMWhJcCFY9x/openai-los...
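The quoted claim is easy to sanity-check. Using the article's own estimates ($3B training compute, $2B inference compute, and roughly $4B revenue as discussed elsewhere in this thread):

```python
# Arithmetic behind the quoted claim that compute alone exceeds revenue.
# All figures are the linked article's estimates, not audited numbers.
revenue = 4e9    # ~$4B annual revenue (figure discussed in the thread)
training = 3e9   # compute to train models
inference = 2e9  # compute to run models

shortfall = (training + inference) - revenue
print(f"Compute costs exceed revenue by ${shortfall / 1e9:.0f}B")
# → Compute costs exceed revenue by $1B
```

That reproduces the article's "a billion dollars more than the entirety of its revenue" figure, though the whole dispute hinges on how much of that $5B is fixed training spend versus per-user serving cost.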
CapEx vs. OpEx.
If they stop training today, what happens? Does training always have to stay at these levels, or will it level off? Is training a fixed cost? I.e., can you add 10x the subscribers while training costs stay static?
IMO, there is a great business in there, but the market will likely shrink to ~2 players. ChatGPT has a huge lead and is already the Kleenex/Google of LLMs. I think the battle is really for second place, and that is likely dictated by who runs out of runway first. I would say Google has the inside track, but they are so bad at product they may fumble. Makes me wonder sometimes how Google ever became a product and a verb.
Obviously you don't need to train new models to operate existing ones.
I think I trust the SemiAnalysis estimate ($250M) more than this one ($2B), but who knows? I do see my revenue estimate was for this year, though. Still, $4B in revenue on $250M of COGS is staggeringly good. No wonder Amazon, Google, and Microsoft are tripping over themselves to offer these models for a fee.