
Comment by mrits

2 days ago

I think we will see the opposite. Even if we made no further progress with LLMs, we'd still have huge advancements and growth opportunities in enhancing workflows and tuning them to domain-specific tasks.

I think you could both be right at the same time. We will see a large number of VC-funded AI startups and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.

Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other, along with every non-VS Code IDE cloning those same features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, and Opencode all doing basically the same thing under their own branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally building and doing the same thing globally. No individual software vendor can innovate and integrate AI features faster than the AI companies themselves can build better tooling to automate that vendor's tools for them. It reeks of the '90s, when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead, or have a slightly longer runway, and claim the whole thing.

I agree with this, but how will these companies make money? Short of a breakthrough, consumers aren't ready to pay for it, and even if they were, open-source models just catch up.

My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.

I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.

  • Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. Like, what is the actual business model? You can sell inference-as-a-service, of course, but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high, it seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure, competition pushes prices down for inference, and what is left?

  • The people who make money serving end users will be the ones with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.

    You'll probably have a player that sells privacy as well.

I don't see how this works, as the cost of running inference is so much higher than the revenue earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where models at the GPT-5 and Claude 4.1 cost-quality level are SOTA.

  • With GPT-5 I'm not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases, I think they'd be profitable.

    • But would they be profitable enough? They've taken on more than $50 billion of investment.

      I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. OpenAI will be lucky to generate that over the next year.


    • > if they stopped research and just focused on productionizing inference use cases I think they’d be profitable

      For a couple of years, until someone who did keep doing research pulled ahead a bit with a similarly good UI.