
Comment by nostromo

4 days ago

They'll all do this eventually.

We're in the part of the market cycle where everyone fights for market share by selling dollar bills for 50 cents.

When a winner emerges they'll pull the rug out from under you and try to wall off their garden.

Anthropic just forgot that we're still in the "functioning market competition" phase of AI and not yet in the "unstoppable monopoly" phase.

"Naveen Rao, the Gen AI VP of Databricks, phrased it quite well:

all closed AI model providers will stop selling APIs in the next 2-3 years. Only open models will be available via APIs (…) Closed model providers are trying to build non-commodity capabilities and they need great UIs to deliver those. It's not just a model anymore, but an app with a UI for a purpose."

~ https://vintagedata.org/blog/posts/model-is-the-product A. Doria

> new Amp Free ($10) access has also been closed off as of last night

Unstoppable monopoly will be extremely hard to pull off given the number of quality open (weights) alternatives.

I only use LLMs through OpenRouter and switch somewhat randomly between frontier models; they each have some amount of personality but I wouldn't mind much if half of them disappeared overnight, as long as the other half remained available.

  • I'm old, so I remember saying the same thing about Google and search.

    I hope you're right!

• I think the big difference is that Google is free: everyone uses Google because it doesn’t cost anything, and for a long time it was the best search engine out there. I am sure that if Google suddenly started charging a few dollars per month for access, Bing’s market share would explode overnight, because it would become “good enough but cheaper”.

      With the AI models, using a model that is “good enough but cheaper” is already an option.


    • I too am old. Google search is free, hard to replicate, and while there used to be lots of search engines, Google was (and arguably still is) miles ahead of all the others in terms of quality and performance.

      A model is hard to train but it doesn't need to be hyper up to date / have a new version come out every day. Inference is cheap (it seems?) and quality is comparable. So it's unclear how expensive offerings could win over free alternatives.

I could be wrong, of course; I don't have a crystal ball, and a single winner could still emerge. I just don't think this is the same as Google.


• In the early years, I remember no other search engine coming close to Google’s quality. We all ditched AltaVista because Google was incredibly better; it would have been awful to switch back to any of the other options. Today we can already switch between the three big proprietary models without noticing much difference, so it’s quite a different landscape.


• There was never a high-quality alternative to Google search until Kagi, and even that isn't free!

• This is like saying that because we have hundreds of open source OSes, Windows will never be a monopoly.

    Software tends toward monopoly simply through usage. Every time a model gets used for esoteric use cases, it gets more training data (that a decentralized open-weight model doesn't get) and it starts building its moat.

    • >This is saying we have hundreds of open source OSes

      We don't; we have about three operating systems with the decades of hardware and software compatibility that make them widely usable. They're among the most complex and complicated things we've ever built. An LLM, by contrast, is a few thousand lines of Python hooked up to a power plant and graphics cards. It's the least defensible piece of software there has ever been.

    • It's more like saying AWS has a monopoly on virtual machine hosting.

      (For those unaware, AWS doesn't have a VM monopoly, and the market dynamics seem similar)

• I think Windows's monopoly is historical.

      Microsoft bundled it with PC hardware, and for decades the vast majority of apps were only ever published for Windows (one could argue this is still true).

      The starting point for LLMs is very different. Who today would publish software that only integrates with ChatGPT? Only a small minority.

      So I agree: I struggle to see how a monopoly can exist here. A GPU monopoly or duopoly, though, perhaps.

    • > Software always gets monopoly simply by usage

      Most software isn't made by monopolies. More directly, enterprise-software stocks are getting hammered because AI offers them competition.

• It’ll be a bunch of tiny moats in that scenario. LLMs are far too generic, adaptable, and flexible in how you use them to build one big moat.

• OpenRouter falls into the acceptable-use category. They're targeting users who misuse their Claude OAuth tokens on non-Anthropic products.

• They will [try to] ban open weights for ethics/security reasons: to stop spammers, to protect children, to stop fascism, to defend minorities. Take your pick; it won't matter why, only which media case they can thrust into the spotlight first.

    • Yes of course they will; the CEO of Anthropic makes that argument, very openly, all the time. But it will be hard to do, I think.


> They'll all do this eventually

And if the frontier continues favouring centralised solutions, they'll get it. If, on the other hand, scaling asymptotes, the competition will be running locally. Watching Claude complain that I haven't paid for SSO-tier subscriptions to data tools that work perfectly fine in a browser is already starting to make a slower, less-capable local model competitive in some research contexts.