Comment by someotherperson
5 days ago
Competition is bad? Who cares - let the big players subsidize and compete between each other. That's what we want. We want strong models at a low price, and we'll hype up whoever is doing it.
Simultaneously, we also hype up the open models that are catching up: models that are significantly cheaper, and that also put pressure on the big players and keep them in check.
People aren't falling for PR; people are encouraging the PR to put pressure on the competition. It's not that hard.
Interesting to see your observation, because I have observed the opposite: posts that share big news about open-weight local models get many upvoted comments arguing local models shouldn't be taken seriously and promoting the SOTA commercial models as the only viable options for serious developers.
Both here and on AI tech subreddits (ones that aren't specifically about local or FOSS), this dynamic seems common, to the degree that I've suspected astroturfing.
So it's refreshing to see that maybe it's just a coincidence or confirmation bias on my end.
Both can be true at the same time. I currently wouldn't waste my time with open models for almost all use cases, but they're crucial from a data privacy and competitive perspective, and I can't wait for them to catch up enough to be as useful as the current frontier models.
I've found qwen3 to be very usable on my local machine (a Framework Desktop with 128GB of RAM). I doubt it could handle the complex tasks I throw at Claude Opus at work, but it's more than capable of doing a surprising number of tasks, with good performance.
Local isn't viable yet on an economic basis; API costs are so low that you're better off taking advantage of the bonanza. As local models become more performant, providers on Openrouter will be able to offer them so cheaply that a $4K Mac Studio with 128GB would likely never reach its payoff period. E.g. Gemma 4 31B is impressive, but it costs practically nothing via Openrouter. Given that there are a ton of providers for open models, I doubt there's any subsidy going on, because the providers are faceless and interchangeable.
At least, that's my theory.
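The payoff-period argument can be made concrete with a back-of-the-envelope calculation. All the numbers below (hardware price, token volume, API and electricity prices) are hypothetical assumptions for illustration, not real quotes:

```python
# Back-of-the-envelope break-even for local vs API inference.
# Every price here is an assumed placeholder, not a real quote.

def breakeven_months(hw_cost, mtok_per_month, api_price_per_mtok,
                     power_cost_per_mtok=0.05):
    """Months until local hardware pays for itself versus API usage.

    hw_cost: upfront hardware price in dollars (e.g. a Mac Studio).
    mtok_per_month: millions of tokens generated per month.
    api_price_per_mtok: hosted price per million tokens.
    power_cost_per_mtok: assumed local electricity cost per million tokens.
    """
    monthly_saving = mtok_per_month * (api_price_per_mtok - power_cost_per_mtok)
    if monthly_saving <= 0:
        return float("inf")  # local never pays off at these prices
    return hw_cost / monthly_saving

# Assumed: $4,000 machine, 50M tokens/month, API at $0.30 per million tokens
months = breakeven_months(4000, 50, 0.30)
print(f"{months:.0f} months")  # 320 months, i.e. roughly 26 years
```

With cheap hosted open models, the monthly saving is tiny, so the break-even point lands decades out; the calculation only favors local hardware at very high token volumes or much higher API prices.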
The big advantages of local on a business level are:
- Freezing your model's exact settings once you've locked in a workflow that works just fine.
- Guarding against insane token usage from LLMs that have been told to never stop until they figure out the solution, or from setting up an LLM run incorrectly. (The last one happened to me with Gemini 3.1 Pro.)
- PII or some other need for on-premise-only LLMs.
I'm just waiting till I can afford a GPU again
> Interesting to see your observation, because I have observed the opposite: posts that share big news about open-weight local models get many upvoted comments arguing local models shouldn't be taken seriously and promoting the SOTA commercial models as the only viable options for serious developers.
When I argue this, my point is that FOSS shouldn't target the desktop with open weights - it should target H200s. Really big parameter models with big VRAM requirements.
Those can always be distilled down, but you can't really go the other way.
I've invested significant time into getting open models to work, and investigating what works well.
The TL;DR is that unless you are doing it as a hobby, or working in an environment where none of the data privacy options supported by Anthropic/OpenAI (including running on Azure/Bedrock with ZDR) work for you, it's not worth it.
The best open models are around the Sonnet 4.6 level. That's excellent, but the level of tasks you can give to GPT 5.4 or Opus 4.6 is just so much higher it doesn't compare (and Opus 4.7 seems noticeably better in my few hours of testing too).
I have my own benchmarks, but I like this much under-publicized OpenHands page: https://index.openhands.dev/home
It shows that for every task they test, closed models do best. The closest an open model gets is Minmax 2.7 on issue resolution, where it's ~1% worse than the leaders.
That matches my experience: fine for small problems, but well behind as the task gets bigger.
I agree, but I'd like to add that people are definitely falling for PR. People are always falling for PR, or no one would bother with PR.
This assumes people are in touch with reality and aren't just motivated by vibes and insta-reactions on social media
> Competition is bad? Who cares - let the big players subsidize and compete between each other.
Subsidizing is the opposite of competing: it's literally the practice of underpricing your product to box out competition. If everyone were competing on a level playing field, they would all price their products above cost.
All these tech oligarch asshat companies need to be regulated to hell and back.
The moat was already too large for smaller players. Let them subsidize. Take from investors and give to us, buying me time to beef up my local stack to run local models.
For many things you already need to go local, and in the future, if you want any privacy, you'll need to go local.
Excellent point, but I still think the oligarchs have gotten a little monopoly-happy.
What's the alternative, move to North Korea?
Well, that's a great big wtf out of left field.
It's hilarious how much this post reads as drafted by an LLM. The emdash, "it's not X, it's Y" framing, incredible.
People use em-dashes all the time; that's why LLMs use them too. Also, guess how LLMs learnt to use "it's not X, it's Y".
I wrote my post myself.
Dogfooding by the slop factory. The artificial centipede.
Big players subsidizing is what kills medium and small players which then kills competition. What follows is monopoly.
Big players operating at loss to distort the market is not a good thing overall.
The medium and small players are literally just distilling the larger models.
It's not the smaller players spending billions on training data.
No, the medium and small players are the Mistrals, DeepSeeks, and H Companies of the world, competing with their own models built using quirky optimisation techniques.