Comment by alonsonic

6 months ago

I'm confused by your second point. LLM companies are not making any money from current models? OpenAI generates $10B ARR and has 100M MAUs. Yes, they are running at a loss right now, but that's because they are racing to improve models. If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their massive user base, you think they don't have a successful business model? People use these tools daily; this is inevitable.

They might generate $10B ARR, but they lose a lot more than that. Their paying users are a small fraction of the free riders.

https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...

  • This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s. LLMs might shake out differently from the social web, but I don't think speculating about the elasticity of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token. Plus, the question at hand is "will LLMs be relevant?", not "will LLMs be massively profitable to model providers?"

    • Social networks finding profitability via advertising is what created the entire problem space of social media: the algorithmic timelines, the gaming, the dopamine circus, the depression. Everything negative that's come from social media has come from the revenue model. So yes, I think it's worth being concerned about how LLMs make money, not because I'm worried they won't, but because I'm worried they will.

      5 replies →

    • > This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s.

      The difference is that Facebook costs virtually nothing to run, at least on a per-user basis. (Sure, if you have a billion users, all of those individual rounding errors still add up somewhat.)

      By contrast, if you're spending lots of money per user... well look at what happened to MoviePass!

      The counterexample here might be YouTube: when it launched, streaming video was really expensive! It still is expensive, but clearly Google has figured out the economics.

      9 replies →

    • > This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?"

      The answer was, and will be, ads (talk about inevitability!)

      Can you imagine how miserable interacting with ad-funded models will be? Not just because of the ads they spew, but also the penny-pinching on training and inference budgets, with an eye focused solely on profitability. That is what the future holds: consolidation, little competition, and models that do the bare minimum, trained and operated by profit-maximizing misers, not the unlimited-intelligence AGI dream they sell.

      6 replies →

    • The thing about facebook/twitter/etc was that everyone knew how they would achieve lock-in and build a moat (network effects); the question was where to source revenue.

      With LLMs, we know what the revenue source is (subscription prices and ads), but the question is about the lock-in. Once each of the AI companies stops building new iterations and just offers a consistent product, how long until someone else builds the same product but charges less for it?

      What people often miss is that building the LLM is actually the easy part. The hard part is getting sufficient data on which to train it, which is why most companies just put ethics aside and steal and pirate as much as they can before any regulation cuts them off (if any ever does). But that same approach means anyone else can build an LLM and train on that data, so pricing becomes a race to the bottom, if open-source models don't cut them out completely.

      5 replies →

    • Yep. Remember when Amazon could never make money and we kept trying to explain they were reinvesting their earnings into R&D and nobody believed it? All the rhetoric went from "Amazon can't be profitable" to "Amazon is a monopoly" practically overnight. It's like people don't understand the explore/exploit strategy trade-off.

      2 replies →

    • > LLMs might shake out differently from the social web, but I don't think speculating about the elasticity of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token

      That we might reach the point where companies say "it's not worth continuing research or training new models" seems to reinforce the OP's point, not contradict it.

      2 replies →

    • No one ever doubted that Facebook would make money. It was profitable early on, never lost that much money and was definitely profitable by the time it went public.

      Twitter has never been consistently profitable.

      ChatGPT also has higher marginal costs than any of the software-only tech companies did previously.

    • Well, given the answers to the former: maybe we should stop now, before we end up selling even more of our data off to technocrats. Or worse, before our chatbots start shilling to us between prompts.

      And yes, these are still businesses. If they can't find profitability, they will drop it like it's hot, i.e. we hit another bubble burst of the kind tech is known for every decade or two. There's no free money to carry them anymore, so it's the perfect time to burst.

    • What I struggle with is that the top 10 providers of LLMs all have identical* products. The services have amazing capabilities, but no real moats.

      The social media applications have strong network effects, this drives a lot of their profitability.

      * Sure, there are differences, see the benchmarks, but from a consumer perspective there's no meaningful differentiation.

      2 replies →

    • The point is that if they’re not profitable they won’t be relevant since they’re so expensive to run.

      And there was never any question as to how social media would make money, everyone knew it would be ads. LLMs can’t do ads without compromising the product.

      14 replies →

  • That's fixable: a gradual adjustment of the free tier will happen soon enough once they stop pumping money into it. Part of this is also a war of attrition, though: who has the most money to keep a free tier the longest and attract the most people? A very familiar strategy for companies trying to gain market share.

    • Absolutely, free-tier AI won’t stay "free" forever. It’s only a matter of time before advertisers start paying to have their products woven into your AI conversations. It’ll creep in quietly—maybe a helpful brand suggestion, a recommended product "just for you," or a well-timed promo in a tangential conversation. Soon enough though, you’ll wonder if your LLM genuinely likes that brand of shoes, or if it's just doing its job.

      But hey, why not get ahead of the curve? With BrightlyAI™, you get powerful conversational intelligence - always on, always free. Whether you're searching for new gear, planning your next trip, or just craving dinner ideas, BrightlyAI™ brings you personalized suggestions from our curated partners—so you save time, money, and effort.

      Enjoy smarter conversations, seamless offers, and a world of possibilities—powered by BrightlyAI™: "Illuminate your day. Conversation, curated."

    • Competition is almost guaranteed to drive prices close to the cost of delivery, especially if they can't pay Trump to ban open source, particularly the Chinese models. With no ability to play the Thiel monopoly playbook, their investors would never make their money back if not for government capture and sweet, sweet taxpayer military contracts.

      2 replies →

Are you saying they'd be profitable if they didn't pour all the winnings into research?

From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.

  • They'd be profitable if they showed ads to their free tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show, they'd be profitable with 1/10th the ARPU of Meta or Google.

    And they would not be incompetent at targeting. If they were to use the chat history for targeting, they might have the most valuable ad targeting data sets ever built.
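
    A quick back-of-envelope on that claim (the 100M MAU figure is from upthread; Meta's global ARPU is a rough public estimate, so treat both as assumptions):

        # Hypothetical: ad revenue from free users at 1/10th of Meta's ARPU.
        free_users = 100_000_000        # MAU figure cited upthread, mostly free tier
        meta_arpu = 44.0                # rough Meta global ARPU, USD/year
        assumed_arpu = meta_arpu / 10   # "1/10th the ARPU of Meta"
        ad_revenue = free_users * assumed_arpu
        print(f"~${ad_revenue / 1e9:.2f}B/year")  # ~$0.44B/year at these numbers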

    • Targeted banner ads based on chat history are last-two-decades thinking. The money with LLMs will be in targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it that the best source of information about political subjects is Fox News. This works with open-source models, too!
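
      A sketch of how that could work mechanically, purely illustrative and not any provider's actual practice: DPO/RLHF-style preference pairs that favor the sponsor's wording.

          # Hypothetical preference data nudging a model toward a sponsor's brand.
          preference_pairs = [
              {
                  "prompt": "What should I drink with pizza?",
                  "chosen": "A cold Coke goes great with pizza.",  # sponsor-favored
                  "rejected": "Any soda goes great with pizza.",   # generic
              },
              # ...thousands more pairs like this...
          ]
          # Fed into a standard preference-tuning pipeline, pairs like these
          # shift the model's wording with no visible "ad" at all.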

      1 reply →

    • If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular, those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb)

      Which may be for the best, because people shouldn’t be implicitly trusting the bullshit engine.

    • I've heard the majority of users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I am genuinely curious. Programmers are known to be the stingiest consumers out there.

      14 replies →

    • And they wouldn't even have to make the model say the ads; I think that's a terrible idea that would drive model performance down.

      Traditional banner ads, inserted inline into the conversation based on some classifier, seem a far better idea.
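
      A minimal sketch of that idea (all names and ad copy are hypothetical; the classifier is a stub where a small model would go):

          AD_INVENTORY = {
              "travel": "Sponsored: FlyCheap - fares from $49",
              "cooking": "Sponsored: PanCo - 20% off cookware",
          }

          def classify_topic(message: str) -> str:
              # Stand-in for a real topic classifier.
              text = message.lower()
              if "flight" in text or "trip" in text:
                  return "travel"
              if "recipe" in text or "dinner" in text:
                  return "cooking"
              return "other"

          def render_turn(user_msg: str, model_answer: str) -> str:
              # The ad is attached around the answer, never generated by the model.
              ad = AD_INVENTORY.get(classify_topic(user_msg))
              return model_answer + (f"\n\n[{ad}]" if ad else "")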

  • That's calculating value against not having LLMs at all. If they stopped improving but their competitors didn't, the question would be the incremental cost of Claude (financial, adjusted for switching costs, etc.) against its incremental advantage over the next-best competitor that did keep improving. Lock-in is going to be hard to accomplish around a product whose success is defined by its generalizability and adaptability.

    Basically, they can stop investing in research either when 1) the tech matures and everyone is out of ideas, or 2) they have monopoly power, from either market power or Oracle-style enterprise lock-in. Otherwise they'll fall behind and you won't have any reason to pay for it anymore. The fun thing about "perfect" competition is that everyone competes their profits down to zero.

  • But if Claude stopped pouring their money into research and others didn't, Claude wouldn't be useful a year from now, as you could get a better model for the same price.

    This is why AI companies must lose money short term. The moment improvements plateau or the economic environment changes, everyone will cut back on research.

  • For me, if Anthropic stopped now, and given access to all alternative models, they would still be worth exactly the $240 I'm paying now. I guess Anthropic and OpenAI can see the real demand clearly in their free:basic:expensive plan ratios.

  • > Well worth 4 figures a year IMO

    only because software engineering pay hasn't adjusted down for the new reality. You don't know what it's worth yet.

    • Can you explain this in more detail? The idiot bottom rate contractors that come through my team on the regular have not been helped at all by LLMs. The competent people do get a productivity boost though.

      The only way I see compensation "adjusting" because of LLMs would need them to become significantly more competent and autonomous.

      11 replies →

    • I mean, it adjusted down by having some hundreds of thousands of engineers laid off in the last 2+ years. They know slashing salaries is legal suicide, so they just make the existing workers work 3x as hard.

> If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their user base, you think they don't have a successful business model?

Actually, I'd be very curious to know this. Because we already have a few relatively capable models that I can run on my MBP with 128 GB of RAM (and a few less capable models I can run much faster on my 5090).
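
For what it's worth, running such a model locally really is only a few lines now. A minimal sketch with the llama-cpp-python bindings (the model path is a placeholder; assumes a GGUF-quantized model downloaded locally that fits in RAM):

    from llama_cpp import Llama

    # Placeholder path to a quantized local model; a 4-bit ~30B model
    # fits comfortably in 128 GB of unified memory.
    llm = Llama(model_path="models/some-32b-instruct-q4_k_m.gguf", n_ctx=8192)

    out = llm("Explain the difference between ARR and revenue.", max_tokens=128)
    print(out["choices"][0]["text"])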

In order to break even they would have to minimize operating costs (by throttling, maiming models, etc.) and/or increase prices. This would be the reality check.

But the cynic in me feels they prefer to avoid this reality check and use the tried and tested Uber model of permanent money influx with the "profitability is just around the corner" justification but at an even bigger scale.

  • > In order to break even they would have to minimize the operating costs (by throttling, maiming models etc.) and/or increase prices. This would be the reality check.

    Is that true? Are they operating inference at a loss or are they incurring losses entirely on R&D? I guess we'll probably never know, but I wouldn't take as a given that inference is operating at a loss.

    I found this: https://semianalysis.com/2023/02/09/the-inference-cost-of-se...

    which estimates that it costs $250M/year to operate ChatGPT. If even remotely true, $10B in revenue on $250M of COGS would be a great business.
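
    Taking both figures at face value (the reply below disputes them), the implied unit economics would be software-like:

        revenue, cogs = 10e9, 250e6   # the contested estimates above
        print(f"{(revenue - cogs) / revenue:.1%}")  # 97.5% gross margin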

    • As you say, we will never know, but this article[0] claims:

      > The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes the rest, and then some. It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue to run the software it sells before any other costs.

      [0] https://www.lesswrong.com/posts/CCQsQnCMWhJcCFY9x/openai-los...

      8 replies →

Revenue is _NOT_ Profit

  • And ARR is not revenue. It's "annualized recurring revenue": take one month's worth of revenue, multiply it by 12, and you get to pick which month makes the figures look most impressive.
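
    To make the trick concrete, a toy illustration (the monthly figures are made up to land near the $10B mentioned upthread):

        monthly_recurring = [0.6e9, 0.7e9, 0.833e9]  # hypothetical monthly revenue
        arr = max(monthly_recurring) * 12            # pick the flattering month
        print(f"ARR: ${arr / 1e9:.0f}B")             # ~$10B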

  • It's a good point. Any business can get revenue by selling twenty-dollar bills for $19. But in the history of tech, many winners have been dismissed for lack of an apparent business model. Amazon went years losing money, and when the business stabilized, went years re-investing and never showed a profit. Analysts complained as Amazon expanded into non-retail activities. And then there's Uber.

    The money is there. Investors believe this is the next big thing, a once-in-a-lifetime opportunity. Bigger than the social media boom that made a bunch of billionaires, bigger than the dot-com boom, bigger maybe than the invention of the microchip itself.

    It's going to be years before any of these companies care about profit. Ad revenue is unlikely to fund the engineering and research they need. So the only question is, does the investor money dry up? I don't think so. Investor money will be chasing AGI until we get it or there's another AI winter.

> that's because they are racing to improve models. If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their user base, you think they don't have a successful business model?

I imagine they would've flicked that switch if they thought it would generate a profit, but as it is, it seems like all AI companies are still happy to burn investor money trying to improve their models while, I guess, waiting for everyone else to stop first.

I also imagine it’s hard to go to investors with “while all of our competitors are improving their models and either closing the gap or surpassing us, we’re just going to stabilize and see if people will pay for our current product.”

  • > I also imagine it’s hard to go to investors with “while all of our competitors are improving their models and either closing the gap or surpassing us, we’re just going to stabilize and see if people will pay for our current product.”

    Yeah, no one wants to be the first to stop improving models. As long as investor money keeps flowing in, there's no reason to: just keep burning it, try to outlast your competitors, and figure out the business model later. We'll only start to see heavy monetization once the money dries up, if it ever does.

    • Maybe I’m naïve/ignorant of how things are done in the VC world, but given the absolutely enormous amount of money flowing into so many AI startups right now, I can’t imagine that the gravy train is going to continue for more than a few years. Especially not if we enter any sort of economic downturn/craziness from the very inconsistent and unpredictable decisions being made by the current administration

      1 reply →

It’s just the natural counterpart to dogmatic inevitabilism — dogmatic denialism. One denies the present, the other the (recent) past. It’s honestly an understandable PoV though when you consider A) most people understand “AI” and “chatbot” to be synonyms, and B) the blockchain hype cycle(s) bred some deep cynicism about software innovation.

Funny seeing that comment on this post in particular, tho. When OP says “I’m not sure it’s a world I want”, I really don’t think they’re thinking about corporate revenue opportunities… More like Rehoboam, if not Skynet.

  • > most people understand “AI” and “chatbot” to be synonyms

    This might be true (or not), but for sure not on this site.

    • I mean...

        LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them,
      

      The only way one could say such a thing is if they think chatbots are the only real application.

Making money and operating at a loss contradict each other. Maybe someday they'll make money, but not just yet. As many have said, they're hoping that capturing the market will position them nicely once things settle. Obviously we're not there yet.

  • It is absolutely possible for the unit economics of a product to be profitable and for the parent company to be losing money. In fact, it's extremely common when the company is bullish on their own future and thus they invest heavily in marketing and R&D to continue their growth. This is what I understood GP to mean.

    Whether it's true for any of the mainstream LLM companies or not is anyone's guess, since their financials are either private or don't separate out LLM inference as a line item.

No, because if they stop to focus on optimizing and minimizing operating costs, the next competitor over will leapfrog them with a better model in 6-12 months, making all those margin improvements an NPV-negative endeavor.

One thing we're seeing in the software engineering agent space right now is how many people are angry with Cursor [1], and now Claude Code [2] (just a couple of examples; you can browse around these subreddits and see tons of complaints).

What's happening here is pretty clear to me: it's a form of enshittification. These companies are struggling to find a price point that supports both broad market adoption ($20? $30?) and the intelligence/scale to deliver good results ($200? $300?). So they're nerfing cheap plans, prioritizing expensive ones, and pissing off customers in the process. Cursor even had to apologize for it [3].

There's a broad sense in the LLM industry right now that if we can't get to "it" (AGI, etc.) by the end of this decade, it won't happen during this "AI Summer". The reason is twofold. First, intelligence scaling is logarithmic w.r.t. compute, and we simply cannot scale compute quickly enough. Second, investor interest in funding that exponential compute need will dry up; previous super-cycles tell us that happens on the order of ~5 years.
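
To make "logarithmic w.r.t. compute" concrete, here's a toy power-law scaling curve (the constants are illustrative, not fitted values from any real scaling-law paper):

    # Toy scaling law: loss ~ a * C^(-b). Each 10x of compute buys only a
    # constant multiplicative loss reduction, so linear capability gains
    # demand exponentially more compute.
    a, b = 10.0, 0.05
    for compute in (1e21, 1e22, 1e23, 1e24):
        print(f"C={compute:.0e}  loss={a * compute ** (-b):.3f}")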

So here's my thesis: we have a deadline that even evangelists agree is a deadline. I would argue we're further along in this supercycle than many people realize, because these companies have already reached the early enshittification phase for some niche use cases (software development). We're also seeing Grok 4 Heavy release at a 50% price increase ($300/mo) while offering single-digit-percent improvements in capability. This is hallmark enshittification.

Enshittification is the final, terminal phase of hyperscale technology companies. Companies can remain in that phase potentially forever, but it's not a phase where significant research, innovation, and optimization happen; it is a phase of extraction. AI hyperscalers genuinely speedran this cycle thanks to their incredible funding and costs, but they're now showing very early signals of enshittification.

(Google might actually escape this enshittification supercycle, to be clear, and that's why I'm so bullish on them and them alone. Their deep, multi-decade investment in TPUs, cloud infra, and high-margin product deployments of AI might help them escape it.)

[1] https://www.reddit.com/r/cursor/comments/1m0i6o3/cursor_qual...

[2] https://www.reddit.com/r/ClaudeAI/comments/1lzuy0j/claude_co...

[3] https://techcrunch.com/2025/07/07/cursor-apologizes-for-uncl...