
Comment by observationist

2 days ago

Well, for one, the model doesn't take various factors into account, assumes a fixed cost per token, and doesn't allow for the people in charge of buying and selling the compute to make decisions that make financial sense. Some of OpenAI's commitments and compute are going toward research, with no contracted need for profit or even revenue.

If you account for the current trajectory of model capabilities, plus bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical medium-to-high-risk VC investment play.

At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users; they'll also dial in costs and tweak their profit-per-token figure. A whole lot of money is being spent right now as marketing by providing free or subsidized access to ChatGPT.

If they maximize exposure now and then dial in costs, they could be profitable with no funding shortfalls by 2030, provided they pivot: dial back available free access and aggressively promote paid tiers and product integrations.

This doesn't even take into account the shopping assistant/adtech deals; it's just ongoing research trajectories, assumed efficiency improvements, and some performance level pegged as "good enough" at the baseline.

They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best-ever players of that game as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.

When an article like this FT piece comes out, assumes negligence and incompetence, and projects the current state of affairs out 5 years in order to paint a negative picture, I have to take the FT's biases and motivations into account.

The FT article is painting a worst case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly!" Turns out, things would go very badly in that case.

ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.

Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.

All of which is more theory. Of course nobody can predict the future. Your argument is essentially “they have enough money and enough ability to attract more that they’ll figure it out,” just like Amazon, which was also famously unprofitable but could “turn it on at any time.”

FT’s argument is, essentially, “we’re in a bubble and OpenAI raised too much and may not make it out.”

Neither of us knows which is more correct. But it is certainly at least a very real possibility that the FT is more correct. Just like the Internet was a great “game changer” and “bubble maker,” so are LLMs/AI.

I think it’s quite obvious we’re in a bubble right now. At some point, those pop.

The question becomes: is OpenAI AOL? Or Yahoo? Or is it Google?

I don’t think anybody is arguing it’s Pets.com.

That's a fabulous tale you've told (the notion that there's a bunch of Anthropic-leaning sites is my personal favourite), but alas, the article is reporting on an HSBC report which it is justifiably sceptical of, and does not in any way, shape or form represent the FT's beliefs.

AI can be a transformative technology, and the economics may still not make sense.