Comment by svcrunch
1 month ago
The grandparent is definitely wrong on (3). Yes, coding is a killer product; I agree with you there.
On (2), I agree with you for local models. BUT there are also the open-source Chinese models accessible via OpenRouter. Your argument ("don't hold a candle to SOTA models") doesn't hold once those are in the comparison.
On (1), I agree more with the grandparent than with your assessment. Yes, OpenAI and Anthropic are killing it for now, but that lead looks short-lived. I use Codex and Claude daily, and it's also clear to me that open source is catching up quickly, both w.r.t. the models and the agentic harnesses.
> BUT there are also the open-source Chinese models accessible via OpenRouter.
I thought so myself, but after burning through a lot of money on OpenRouter in just a few days, I subscribed to Z.ai's Coding Pro plan instead, and the subscription is much, much easier on my wallet.
Open models are good, but if you need a $10k GPU to run them, then 99% of people are better off subscribing to OAI or CC.
Nowadays I also feel model performance matters less than the design of the tool harness, inference speed, and the other systems that surround a typical coding model.
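For anyone unfamiliar, "accessible via OpenRouter" just means hitting its OpenAI-compatible endpoint. Here's a minimal sketch, assuming the standard openai Python client; the GLM model slug is illustrative, so check openrouter.ai/models for the current one:

    import os
    from openai import OpenAI  # OpenRouter speaks the OpenAI chat API

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter endpoint
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    # "z-ai/glm-4.5" is an illustrative slug; substitute the current GLM model.
    resp = client.chat.completions.create(
        model="z-ai/glm-4.5",
        messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
    )
    print(resp.choices[0].message.content)

The catch, as noted above, is that pay-per-token billing adds up fast under heavy agentic use, which is what makes flat-rate subscriptions attractive.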
> the open-source Chinese models accessible via OpenRouter
And? They aren't as good as SOTA models. Even the SOTA providers' small models aren't worth using for many of my coding tasks.
In my limited experience with it, GLM 5.1 is on par with Opus 4.6.
I used GLM5 quite a bit, and I'd say it was maybe on par with Sonnet for most simple-to-medium tasks. Definitely not Opus, though. I didn't test super-long-context tasks, and that's where I would expect it to break down. A recent study on software maintainability still showed Sonnet and Opus were peerless on that metric, although the GLM series has been making impressive gains.