Comment by famouswaffles
2 days ago
> The value of ChatGPT is not the ChatGPT, it's what ChatGPT produces. It's a middleman...
Spotify and Uber are aggregators with high marginal costs that they do not control. Spotify has to pay labels for every stream; Uber has to pay drivers for every ride. They cannot scale their way out of those costs because they don't own the underlying asset (the music or the labor).
OpenAI is not a middleman; they own the factory. They are "manufacturing" intelligence. Their primary costs are compute and energy. Unlike human labor (Uber) or IP licensing (Spotify), the cost of compute is on a strong deflationary curve. Inference costs have dropped by orders of magnitude in the last couple of years while model quality has improved, and they will keep dropping. Gemini's median query costs no more than a Google search. LLM inference is already cheap.
> Any idiot off the street can be the most used website on Earth. Easy - go to my website, and I give you free stuff.
If they were only burning cash to give away a free product, you’d be right. But they are reportedly at ~$4B in annualized revenue. That is not "giving away free stuff" to inflate metrics; that is the fastest-growing SaaS product in history.
You are conflating "burning cash to build infrastructure" (classic aggressive scaling, like early Amazon) with "structurally unprofitable unit economics" (MoviePass).
OpenAI's unit economics are fine. Inference is already cheap enough that ads could make the business profitable today. The costs this article is alluding to? OpenAI doesn't need to incur any of that for the tier of models and use cases they serve today. They are trying to build, and be able to serve, 'AGI', which they project will be orders of magnitude more costly. If they do manage that, then none of those costs will matter. If they don't, then they can just... not do it. 'AGI' is not necessary for OpenAI to be a profitable business.
> But they are reportedly at ~$4B in annualized revenue. That is not "giving away free stuff" to inflate metrics; that is the fastest-growing SaaS product in history.
Right, which is just not very impressive given how much money they are burning.
> OpenAI's unit economics are fine
I disagree, they lose massive amounts of money on every query.
The only way for OpenAI to make money off queries is to charge more for them, but that won't work because they have no moat, and cannot even create a moat because of how LLMs work. Again, the model itself and the interface are worthless; consumers only care about what it produces.
Google, Meta, et al. could trivially overthrow OpenAI in my view. Most users probably wouldn't even notice, because they use other interfaces on top of models.
I also think ads are a dead end. Consumers absolutely will not tolerate advertisements in their LLMs. No student is going to submit an essay which has obvious hints towards Bose making the best speakers. No programmer is going to write code that embeds a Java runtime because Oracle paid for OpenAI ad space. No artist is going to publish art that just so happens to contain lots of references to Coca Cola.
LLM chats are just not like other tools. If Google has ads, they can get in the way, but the core Google thing is not compromised. If an LLM has ads, I can no longer trust ANY of its output, ever, and it's as good as worthless.
OpenAI might be tempted to do the dark pattern thing and hide their ads as much as possible, but I don't think that will work either. It's just not acceptable for the tool to do that, and I don't think consumers will be stupid enough to fall for it. Already, we are seeing online advertising rapidly plummet in value due to the sheer volume of ads and the number of scams.
Advertisers don't know that yet, but they will. Google might know it, but they certainly won't say it out loud. I can tell you right now, the average consumer has been so bombarded by shitty ads that they've become masterminds. They expertly navigate around them and elegantly ignore them in their peripheral vision. They know X, Y, Z is a scam. New advertising mediums shake things up for a bit, but then those die too. Metrics won't necessarily tell you that, because so much of the traffic is bots, so you wouldn't know.
>Right, which is just not very impressive given how much money they are burning
They are not burning that much money right now.
>I disagree, they lose massive amounts of money on every query.
Both Google and Altman have confirmed that a median LLM query is no more expensive than a Google search. Beyond that, multiple third parties offer profitable access to open-source LLMs and other models. Inference is cheap; there's no doubt about it. They lose money because they have hundreds of millions of weekly active users who are not monetized in any way (no ads, nothing).
>Google, Meta, et al. could trivially overthrow OpenAI in my view.
If they could, it would have happened. Both of these players are stuffing their clones in front of billions of users (Android and all of Meta's apps), and neither has dented OpenAI's growth or relevance. ChatGPT is still the undisputed leader in the consumer LLM space. Gemini is a very, very distant second, and the rest might as well not even register.
There's a reason Edge and Bing usage is still minuscule despite Microsoft having a chokehold on consumer laptops/computers and setting them as defaults. People need to understand that you don't unseat a leader just by copying them. They wish they could trivially overthrow OpenAI, but they actually can't.
>I also think ads are a dead end. Consumers absolutely will not tolerate advertisements in their LLMs.
People said the same about Netflix and countless other services that introduced ads. Instead, the ad-supported plan quickly became Netflix's most popular tier. The implementation has to be really obnoxious before people actually care about ads.
>No student is going to submit an essay which has obvious hints towards Bose making the best speakers. No programmer is going to write code that embeds a Java runtime because Oracle paid for OpenAI ad space. No artist is going to publish art that just so happens to contain lots of references to Coca Cola.
I'm sorry, but you are making up problems that don't need to exist. You are essentially imagining the LLM equivalent of obtrusive pop-up ads, and I have no idea why. Of course ChatGPT won't be doing any of those things; that's ridiculous.