Comment by davedx
12 hours ago
The great thing about LLMs being more or less commoditized is that switching is so easy.
I use Claude Code via the VS Code extension. When I got a couple of 500 errors just now, I simply copy-pasted my last instructions into Codex and kept going.
It's pretty rare that switching costs are THAT low in technology!
It’s not a moat, it’s a tiny groove on the sidewalk.
I’ve experienced the same. Even guide markdown files that work well for one model or vendor will work reasonably well for the other.
Which is exactly why these companies are now all focused on building products rather than (or alongside) improving their base models. Claude Code, Cowork, Gemini CLI/Antigravity, Codex - all proprietary, and none allows model swapping (or only with heavy restrictions). As models get more and more commoditized, the idea is to enforce lock-in at the app level instead.
FWIW, OpenAI Codex is open source, and they help other open source projects like OpenCode integrate their accounts (not just the expensive API), unlike Anthropic, which blocked that last month and forces people to use their closed source CLI.
Gemini CLI is open source too, though I think the consensus is it's a distant third behind Claude Code and Codex
The classic commoditize your complements.
I only integrate with models via MCP. I highly encourage everybody to do the same to preserve the commodity status
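The MCP point is concrete: the protocol is vendor-neutral JSON-RPC 2.0, so the same tool-call message works against any compliant server regardless of which model sits behind it. A minimal stdlib-only sketch of the wire format (the tool name and arguments here are hypothetical, for illustration only):

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    # Build an MCP "tools/call" request in JSON-RPC 2.0 wire format.
    # Nothing in this message is vendor-specific, which is what keeps
    # the model on the other end swappable.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# "search_docs" and its argument are made-up names for this example.
msg = mcp_tool_call(1, "search_docs", {"query": "switching costs"})
print(json.dumps(msg))
```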
Using "low cost" and LLMs in the same sentence is kind of funny to me.
The switching cost is so low that I find it's easier and better value to have two $20/mo subscription from different providers than a $200/mo subscription with the frontier model of the month. Reliability and model diversity are a bonus.
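The low switching cost can be made concrete: most vendors expose an OpenAI-compatible chat endpoint, so moving between providers is essentially a config change — swap the base URL and model name, keep the payload. A stdlib-only sketch (the endpoints and model names below are hypothetical placeholders, not real provider URLs):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    # Build an HTTP request for an OpenAI-compatible /chat/completions
    # endpoint. Only base_url and the model string change when you
    # switch vendors; the message payload itself stays identical.
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder providers -- substitute the real endpoints you subscribe to.
providers = [
    ("https://api.provider-a.example/v1", "model-a"),
    ("https://api.provider-b.example/v1", "model-b"),
]
messages = [{"role": "user", "content": "Refactor this function."}]
reqs = [build_chat_request(url, "sk-test", model, messages)
        for url, model in providers]
```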
I genuinely don't know how any of these companies can make extreme profit for this reason. If a company makes a significantly better model, shouldn't the model be able to explain to any competitor how it's better?
Google succeeded because it understood the web better than its competitors. I don't see how any of the players in this space could be so much better that they could take over the market. It seems like these companies will create commodities, which can be profitable, but that's incredibly risky for early investors and doesn't generate the profits that would be necessary to justify today's valuations.
> If a company makes a significantly better model, shouldn't it be able to explain how it's better to any competitor?
No. Not if it wasn't trained on any material that reveals the secret sauce behind why it's better.
LLMs don't possess introspection into their own training process or architecture.
That's my point. Anything that could exist that's significantly "better" would be able to share more about its creation. And anything that could be significantly better would have to be capable of "understanding" things it wasn't trained on.
> It's pretty rare that switching costs are THAT low in technology!
Look harder. Swapping USB devices (mouse,…) takes even less time. Switching Wi-Fi is also easy. Switching browsers works the same way. I can equally use vim/emacs/vscode/sublime/… for programming.
Switching between vim <-> emacs <-> IDEs is way harder than swapping a USB device (unless you already know how to use them).
I don't know, USB A takes 3 attempts to plug in for some reason.
Good point: those are standards, so by definition society forced vendors to behave and play nicely together. LLMs are not standards yet, and it's pure luck that English works fine across different LLMs for now. Some labs are trying to push their own formats and stop that, especially around reasoning traces, e.g. Codex removing reasoning traces between calls and Gemini requiring reasoning history. So don't take this for granted.
I dunno. Text is a pretty good de facto standard. And they work in lots of languages, not just English.
You make it sound like lock-in doesn't exist. But your examples are cherry-picked. And they're all standards anyway; their _purpose_ was easy switching between implementations.
Most people only have one mouse or Wi-Fi network. If my Wi-Fi goes down, my only other option is to use a mobile hotspot, which is inferior in almost every way.
> Most people only have one mouse
Tell me you're not a Mac user without telling me you're not a Mac user...
I mean, Sublime died overnight when VS Code showed up.
On some agents, you just switch the model and carry on.
Except Kimi Agent (via its website) is hard to replace. I tried the same task in Claude Code, Codex, and Kimi Agent, and for office tasks the results are incomparable: the Anthropic and OpenAI versions are far behind.