Comment by observationist
2 days ago
You have to take the source into consideration. The FT is part of Anthropic's circle of media outlets and financial ties. It benefits them to drum up support for OpenAI's competition, primarily Anthropic, but they (the FT) also have deep ties to Google and the adtech regime.
They benefit from slowing and attacking OpenAI because there's no clear purpose for these centralized media platforms except as feeds for AI, and even then, social media and independent journalists are higher-quality sources and filters. Independents often make more money doing their own journalism directly than the 9-to-5 office drones the big outlets employ. Print media has been in decline for almost three decades now, and AI is just the latest asteroid impact, so they're desperate to stay relevant and profitable.
They're not dead yet, and they're using lawsuits and backroom deals to insert themselves into the ecosystem wherever they can.
This stuff boils down to heavily biased industry propaganda, subtly propping up their allies, overtly bashing and degrading their opponents. Maybe this will be the decade the old media institutions finally wither up and die. New media already captures more than 90% of the available attention in the market. There will be one last feeding frenzy as they bilk the boomers as hard as possible, but boomers are on their last hurrah, and they'll be the last generation for whom TV ads are meaningfully relevant.
Newspapers, broadcast TV, and radio are dead, long live the media. I, for one, welcome our new AI overlords.
All of which is great theory without any kind of evidence? Whereas the evidence pretty clearly shows OpenAI is losing tons of money and the revenue is not on track to recover it?
Well, for one, the model doesn't take various factors into account: it assumes a fixed cost per token and doesn't allow the people in charge of buying and selling the compute to make decisions that make financial sense. Some of OpenAI's compute commitments are going toward research, with no contractual need for profit or even revenue.
If you account for the current trajectory of model capabilities, plus bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical medium-to-high-risk VC investment play.
At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users; they'll also dial in costs and tweak their profit-per-token figure. A whole lot of money is being spent right now as marketing, in the form of free or subsidized access to ChatGPT.
If they maximize exposure now and then dial in costs, they could be profitable with no funding shortfalls by 2030: pivot, dial back free access, and aggressively promote paid tiers and product integrations.
This doesn't even take into account the shopping assistant/adtech deals, just ongoing research trajectories, assumed improved efficiencies, and some pegged performance level presumed to be "good enough" at the baseline.
They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best ever players of those games as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.
When an article like this FT piece comes out and makes assumptions of negligence and incompetence and projects the current state of affairs out 5 years in order to paint a negative picture, then I have to take FT and their biases and motivations into account.
The FT article is painting a worst-case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly?" Turns out, things would go very badly in that case.
ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.
Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.
All of which is more theory. Of course nobody can predict the future. Your argument is essentially "they have enough money, and enough ability to attract more, that they'll figure it out," just like Amazon, which was also famously unprofitable but could "turn it on at any time."
FT’s argument is, essentially, “we’re in a bubble and OpenAI raised too much and may not make it out.”
Neither of us knows which is more correct. But it is certainly at least a very real possibility that the FT is more correct. Just like the Internet was a great “game changer” and “bubble maker,” so are LLMs/AI.
I think it’s quite obvious we’re in a bubble right now. At some point, those pop.
The question becomes: is OpenAI AOL? Or Yahoo? Or is it Google?
I don’t think anybody is arguing it’s Pets.com.
That's a fabulous tale you've told (the notion that there's a bunch of Anthropic-leaning sites is my personal favourite), but alas, the article is reporting on a GSBC report, of which they are justifiably sceptical, and it does not in any way, shape, or form represent the FT's beliefs.
AI can be a transformative technology, and the economics can still fail to make sense.