Comment by Deegy

4 hours ago

They know that LLMs as a product are racing towards commoditization. Bye bye profit margins. The only way to win is regulation allowing a few approved providers.

They are more likely racing towards wildly overinflated government contracts, because they aren't going to turn a profit the way they're currently operating without some of that funny money.

Yeah, but we can self-host them. At this point it's more about infrastructure and compute capacity to meet demand, and Google has won because it has multiple business models, massive cash flow, TPUs, and existing infrastructure to expand on. It would take a new company ~25 years to map out compute, build data centers, and end up with viable, tangible infrastructure, all while trying to figure out how to make a profit.

I'm not sure how regulation would work here, but prompt injection and whatever other attacks we haven't seen yet, where agents can be hijacked and made to do things, sound pretty scary.

It's a race towards AGI at this point. Not sure that can be achieved, as language != consciousness IMO.

  • >Yeah, but we can self-host them

    Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Are they integratable with the tools they use?

    Statistically, close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially ones with large amounts of VRAM.

What profit margins?

  • It is unclear. Every day I seem to read contradictory headlines about whether or not inference is profitable.

    If inference has significant profitability and you're the only game in town, you could do really well.

    But without regulation, as a commodity, the margin on inference approaches zero.

    None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.

    • It's probably "operationally profitable" when you ignore capex, depreciation, dilution, and the other expenses required to stay current.

      Of course that means it's unprofitable in practice/GAAP terms.

      You'd have to have a pretty big margin on inference to make up for the model development costs alone.

      A 30% margin on inference for a GPU that will last ~7 years will not cut it.
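
      A back-of-envelope sketch of that claim (all figures below are hypothetical placeholders, not numbers from this thread):

      ```python
      # Rough check: does a 30% gross margin on inference cover the GPU itself,
      # let alone model development? All numbers are illustrative assumptions.

      gpu_cost = 30_000                        # hypothetical hardware cost per GPU, USD
      gpu_lifetime_years = 7                   # lifetime assumed in the comment above
      inference_revenue_per_gpu_year = 20_000  # hypothetical revenue per GPU per year
      gross_margin = 0.30                      # margin on inference assumed above

      lifetime_gross_profit = inference_revenue_per_gpu_year * gross_margin * gpu_lifetime_years
      net_after_hardware = lifetime_gross_profit - gpu_cost  # the margin above excludes capex/depreciation

      print(f"Gross profit per GPU over {gpu_lifetime_years} years: ${lifetime_gross_profit:,.0f}")
      print(f"Left over after replacing the GPU: ${net_after_hardware:,.0f}")
      # Whatever remains per GPU, summed across the fleet, still has to
      # recoup the training runs needed to stay at the frontier.
      ```

      Under those placeholder numbers the residual per GPU is small, which is the point: the inference margin has to carry both the hardware and the ongoing model R&D.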

  • There are profit margins on inference, from what I understand. However, the hefty training costs obviously make it a money-losing operation overall.

The only way to win is to commoditize your complement (IMO).

  • That's a good line, but it only works if market forces don't commoditize you first. Blithely saying "commoditize your complement" is a bit like saying "draw the rest of the owl."