Comment by le-mark

18 hours ago

Google will definitely lose. LLMs supplant search. But not the old document search, which they stopped doing long ago.

Add in the fact that open-weight models are only 6-12 months behind frontier models, and AI companies aren’t building a moat; they’re on a treadmill. And treadmills don’t justify the valuations OR the hype.

AI companies are in trouble.

I see one profitable enterprise for AI: spying on everyone, managing their lives (or otherwise) tightly, automating foreign conquests, and making only the top decisions while delegating everything else, like a king. I can see a group, one could even say a class, of people who would happily invest in such a future.

  • Exactly. I keep saying, AI is not useful to us. There will be no AI companies.

    Even in this supposedly profitable enterprise, the people involved are far too moronic to control the thing they’re trying to invent; it will only be a matter of time before it turns around and eliminates them as well...

Not all AI companies are the same.

Some are piling on masses of debt to build capacity (e.g. Oracle). Others are just reinvesting the profits from the rest of their company (e.g. Google, Meta).

Anthropic’s moat is their best tool, Claude Code.

OpenAI’s moat is the brand of ChatGPT, once the fastest growing app in the history of the world.

It’s possible that open-weight models keep pace, but it’s also possible that the investment to train them becomes prohibitively expensive and they cease to keep pace with the large foundation-model companies.

  • I really don't think open models will lose. I think they are cheaper to train because they have to be more efficient than the monstrosities we have now.

    There is no theory that says the capabilities of the current frontier models can’t exist in models with 1/100th of the compute waste ;). When we start trending in that direction, and oh wow, we truly are, there will be no reason for these services: you could run them on your own hardware without serious investment (a back-of-the-envelope sketch follows at the end of this comment).

    The moat OpenAI and Anthropic have is that they, among others, have attempted to buy up all of the compute hardware for the next two years. That’s intentional. They know the only existential threat to them is someone coming up with a way to do this better than they do. It’s already happened, and it’s going to become more and more divergent.
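
    For a back-of-the-envelope sense of the "run it on your own hardware" claim, here is a rough Python sketch. The parameter counts are illustrative assumptions, not any vendor's real numbers, and KV-cache/activation memory is ignored:

        # Approximate weight-storage footprint of a dense model at various precisions.
        # Parameter counts are illustrative assumptions, not any vendor's real numbers.
        def weight_gb(params_billion: float, bits_per_weight: float) -> float:
            """Weights in GB: params * (bits per weight) / 8 bits per byte."""
            return params_billion * 1e9 * bits_per_weight / 8 / 1e9

        for params in (27, 70, 400):      # hypothetical model sizes, in billions
            for bits in (16, 8, 4):       # fp16, 8-bit, and 4-bit quantization
                print(f"{params}B @ {bits}-bit: ~{weight_gb(params, bits):.0f} GB")

    At 8-bit, a 27B dense model is on the order of 27 GB of weights, which already fits on a single high-memory consumer machine; at 4-bit it is roughly half that.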

  • Open-weight models will keep pace, because capable open-weight models are China’s strategy for preventing a closed takeover of AI by the West.

      US megatechs stole copyrighted data to train their hyper-expensive models.

      Chinese megatechs stole copyrighted data AND trained their models on derivative / synthetic data that came from the US foundation models.

      I’m happy Chinese foundation-model trainers were able to use Huawei (homegrown) hardware to train their models (also because having Nvidia dominate that sector is terrible for competition). But if Chinese megatech companies are just deriving their open-weight models from US companies, then this is just an IP theft exercise.

One of the double-edged swords I see is that devs/evangelists pushing agentic coding lean on the 'good enough' argument. If that’s true, and those asking for software can live with good-enough AI code, then the moment free local models hit that level, the party is over for the continual push toward the premium, tip-of-the-spear models.

  • We might already be there. I've been running Qwen-3.6-27B with 8-bit quantization locally via llama.cpp (~100k context window), and to be honest, for my use case it's more usable than claude-code 40-50% of the time. I only have the $20/mo plan, so I often hit rate limits after 2-3 prompts. And while the local model is slower, it just keeps chugging, is practically free, and more often than not produces code similar to Claude's. I wouldn't be surprised if in 6-12 months we have local models comparable to opus 4.6, which I'd personally consider the tipping point where agentic coding becomes practical.
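
    For reference, the equivalent setup through the llama-cpp-python bindings looks roughly like this (a minimal sketch, not my exact invocation; the GGUF filename, prompt, and parameter values are illustrative placeholders):

        # Minimal local chat-completion setup via llama-cpp-python (wraps llama.cpp).
        # The model path and parameter values below are illustrative placeholders.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./qwen-27b-q8_0.gguf",  # hypothetical 8-bit quantized GGUF
            n_ctx=100_000,                      # ~100k-token context window
            n_gpu_layers=-1,                    # offload all layers to GPU if available
        )

        resp = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Refactor this loop into a generator: ..."}],
            max_tokens=1024,
        )
        print(resp["choices"][0]["message"]["content"])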

What does their patent moat look like?

  • Google owns the core transformer patent(s), for one thing, e.g. https://patents.google.com/patent/US10452978B2/en.

    I haven't read the claims, so I don't know how easy it will be to work around them. This particular one seems to cover encoder-decoder networks, so it's not necessarily applicable to later LLM implementations. But I'd be amazed if Google didn't have several other relevant patents in their arsenal.