Comment by esafak

2 days ago

The terminal bench scores look weak but nice otherwise. I hope once the benchmarks are saturated, companies can focus on shrinking the models. Until then, let the games continue.

Shrinking and speed; speed is a major thing. Claude Code is just too slow: it's very good, but it has no reasonable way to handle simple requests because of the overhead, so everything should just be faster. If I were Anthropic, I would have bought Groq or Cerebras by now. I'm not sure whether they (or the other big players) are working on similar inference hardware to provide 2000 tok/s or more.

  • Z.ai (at least the mid/top-end subscription; not sure about the API) is pretty slow too, especially during some periods. Cerebras, of course, is probably a different story (if it's not quantized).

z.ai models are crazy cheap. The one-year Lite plan is like 30€ (on sale, though).

Complete no-brainer to get it as a backup with Crush. I've been using it for read-only analysis and implementing already planned tasks with pretty good results. It has a slight habit of expanding scope without being asked. Sometimes it's a good thing, sometimes it does useless work or messes things up a bit.

  • I tried several times. In my personal experience it's no match for the Claude models. From my point of view, there's almost no place for a second spot. When you're doing things for work, each bug is hours of work, a potentially lost customer, etc. Why would you trust your money … just to a backup?

    • It's a perfectly serviceable fallback for when Claude Code kicks me off in the middle of an edit on the Pro plan (which happens constantly to me now) and I just want to finish tweaking some CSS styles or whatever to wrap up. If you have a legitimate concern about losing customers, then yes, you're probably in the wrong target market for a $3/mo plan...


    • I'm using it for my own stuff and I'm definitely not dropping however much it costs for the Claude Max plans.

      That's why I usually use Claude for planning, feed the issues to beads or a markdown file and then have Codex or Crush+GLM implement them.

      For exploratory stuff I'm "pair-programming" with Claude.

      At work we have all the toys, but I'm not putting my own code through them =)


    • GLM 4.6 was kind of meh, especially in Claude Code, since thinking was seemingly entirely broken. This week I've been playing with 4.7 and it seems like a massive improvement; subjectively it's pretty much at Sonnet level (while still using a lot fewer thinking tokens).

  • I shifted from Crush to Opencode this week because Crush doesn't seem to be evolving in its utility; a plan mode, subagents, etc. don't seem to be things they're working on at the moment.

    I'd love to hear your insight though, because maybe I just configured things wrong haha

    • I can't understand why every CLI tool doesn't have a Plan mode already. It should be table stakes: I should be able to just ask questions, or have a model do code reviews, without worrying about it rushing headlong into implementation.

      Looking at you, Gemini CLI.

  • This doesn't mean much if you hit the daily limits quickly anyway, so the API pricing matters more.

    • TBH when I hit the Claude daily limit I just take that as a sign to go outside (or go to bed, depending on the time).

      If the project management is on point, it really doesn't matter. Unfinished tasks stay as they are; if something is unfinished in the context, I leave the terminal open, come back some time later, type "continue", hit enter, and go away.

We're not gonna see significant model shrinkage until the money tap dries up. Between now and then, we'll see new benchmarks/evals that probe the holes in model capabilities, in cycles, as each new round saturates.

  • Isn't Gemini 3 Flash already model shrinkage that does well at coding?

    • Xiaomi, Nvidia Nemotron, Minimax, lots of other smaller ones too. There are massive economic incentives to shrink models because they can be provided faster and at lower cost.

      I think that even with the money going in, there has to be some revenue supporting that development somewhere. And users are now looking at the cost. I've been using Anthropic Max for most of this year; after checking out some of these other models, it's clearly overpriced (I'd also say their Claude Code moat has been breached). And Anthropic's API pricing is completely crazy when you use some of the paradigms they suggest (agents/commands/etc.), i.e. token usage is going up, so efficient models are driving growth.

  • > We're not gonna see significant model shrinkage until the money tap dries up.

    I'm not sure about that. Microsoft has been doing great work on "1-bit" LLMs, and dropping the memory requirements would significantly cut down on operating costs for the frontier players.

It's a good model, for what it is. Z.ai's big business proposition is that you can get Claude Code with their GLM models at much lower prices than what Anthropic charges. This model is going to be great for that agentic coding application.

  • … and wake up every night because you saved a few dollars, there are bugs, and they're due to this decision?

    • I pay for both Claude and Z.ai right now, and GLM-4.7 is more than capable for what I need. Opus 4.5 is nice but not worth the quota cost for most tasks.

    • Well, I feel like all models are converging. Maybe Claude is good, but only time will tell, as Gemini Flash and GLM put pressure on the Claude/Anthropic models.

      People (here) are definitely comparing it to Sonnet, so if you take this stance of saving a few dollars, surely you must hold the same opinion about the Opus model, and nobody should use Sonnet either.

      Personally, I'm interested in open-source models because they're what would have genuine value and competition after the bubble bursts.