GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models [pdf]

Really appreciate the depth of this paper; it's a welcome change from the usual model announcement blog posts. The Zhipu/Tsinghua team laid out not just the 'what' but the 'how,' which is where the most interesting details are for anyone trying to build with or on top of these models.

The post-training methodology (Sec 3) is what really stands out to me. The idea of creating specialized 'expert models' for reasoning, agents, and chat, and then distilling their capabilities into a final unified model is a fascinating approach. It feels like a more structured way to solve the "jack of all trades, master of none" problem that can plague generalist models. Instead of just mixing all the data, they're essentially having a generalist learn from a committee of specialists.
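
To make that concrete, here's a toy sketch of the data flow as I read it: each expert answers prompts from its own domain, and the pooled pairs become SFT data for the unified model. Everything below (the expert stand-ins, prompts, and tags) is made up for illustration; it's just the pooling step, not the paper's actual pipeline.

    # Toy sketch: pool expert-generated responses into one SFT dataset for
    # the unified generalist. All names and outputs here are hypothetical.
    from typing import Callable, Dict, List

    experts: Dict[str, Callable[[str], str]] = {
        "reasoning": lambda p: "<think>step by step...</think> answer to: " + p,
        "agentic":   lambda p: "<tool_call>search('" + p + "')</tool_call>",
        "chat":      lambda p: "Sure, here's a friendly answer to: " + p,
    }

    prompt_pools: Dict[str, List[str]] = {
        "reasoning": ["Prove that the square root of 2 is irrational."],
        "agentic":   ["Find the arXiv page for the GLM-4.5 report."],
        "chat":      ["Explain mixture-of-experts models to a beginner."],
    }

    def build_distillation_set() -> List[dict]:
        """Each expert answers prompts from its own domain; the pooled
        pairs are what the unified model gets fine-tuned on."""
        sft_data = []
        for domain, expert in experts.items():
            for prompt in prompt_pools[domain]:
                sft_data.append({"domain": domain, "prompt": prompt,
                                 "response": expert(prompt)})
        return sft_data

    for example in build_distillation_set():
        print(example["domain"], "->", example["response"][:60])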

A couple of the findings from their RL experiments are pure gold for anyone working in this space. The counter-intuitive result that a single-stage RL process at the full 64K context length outperforms a progressive, multi-stage approach (Fig 6) is a fantastic lesson. I've seen teams assume the opposite would be true. Also, the pragmatic choice to use an XML-like template for function calls to avoid JSON escaping hell (Fig 4) may be a small but brilliant engineering decision that makes a huge difference in practice. Wrangling escaped code inside JSON turns out to be a mess.
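
For anyone who hasn't fought this before, here's a toy illustration of the escaping problem; the tag names are illustrative only, not necessarily the paper's exact template:

    # Escaping code into a JSON string argument vs. an XML-like template
    # where the value is delimited by tags. Tag names are illustrative only.
    import json

    patch = 'print("hello, world")\nif ok:\n    run()'

    # JSON-style tool call: every quote and newline in the code gets escaped.
    json_call = json.dumps({
        "name": "write_file",
        "arguments": {"path": "demo.py", "content": patch},
    })
    print(json_call)  # content becomes "print(\"hello, world\")\nif ok: ..."

    # XML-like call: the argument value sits verbatim between tags, so the
    # model never has to emit escape sequences around the code it writes.
    xml_call = (
        "<tool_call>write_file\n"
        "<arg_key>path</arg_key><arg_value>demo.py</arg_value>\n"
        "<arg_key>content</arg_key><arg_value>\n" + patch + "\n</arg_value>\n"
        "</tool_call>"
    )
    print(xml_call)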

The performance on SWE-bench is impressive, putting it in the same league as much larger or proprietary models. What I’d love to see, and maybe others here have thoughts, is whether this hybrid training recipe holds up outside ARC-style evals. For example, do the agentic improvements transfer to messier, real-world workflows where APIs are undocumented, partial failures are common, and user input is full of ambiguity?

  • Are all these "post/mid-training tweaks" important if you have a specific domain with abundant, verified, or synthetic data and labels?

    Can a small team working on ASI or a domain-specific model stick to scaling a 2024-era best-practices training stack, or will they miss out on massive improvements?

I've been playing around with GLM-4.5 as a coding model for a while now and it's really, really good. In the coding agent I've been working on, Octofriend [1], I've sometimes had it enabled and mistaken it for Claude 4. Subjectively, my experience has been:

1. Claude is somewhat better at whole-codebase tasks, where you need to reason over a bunch of context and consider system interactions.

2. GLM-4.5 is somewhat better at being "honest" — i.e. I rarely see it doing the things Claude does like making broken tests pass by changing the test instead of fixing the bug.

Both are quite good though, and GLM-4.5 has found bugs that both Claude 4 Sonnet and 4.1 Opus have failed to catch. In general I think Claude wins a little more frequently on debugging tasks than GLM-4.5, but it's close.

Compared to GPT-5, both Claude and GLM feel more consistent, although GPT-5 sometimes has long, brilliant runs where it nails everything with subjectively higher code quality than either of them. However, once GPT-5 goes off the rails, it's hard to get it back on track, so it can be a bit frustrating to work with in comparison.

1: https://github.com/synthetic-lab/octofriend

  • I just read your comment and decided to give GLM-4.5 a try in Kilocode. I'd been using Gemini CLI all day to try to resolve a tricky bug in some compiler code (a compiler for a subset of C that generates microcode for... a weird architecture, I'll leave it at that). GLM-4.5 zoomed in on the problem right away, a problem that had eluded Gemini CLI all day. Gemini had been leading me on a wild goose chase, implicating a function that turned out not to be the problem and proposing all kinds of lame changes to it that it claimed would fix things, and they never did, because the bug wasn't in that function.

    • Sometimes getting a second pair of eyes on a problem helps, and that's usually not a judgement on the smartness of the first pair. Seems like it applies to coding agents too.

    • I'm curious about your setup. Is it just Gemini CLI, or are you combining it with other frameworks?

    • Gemini CLI uses a whole-file edit format and burns through tokens very fast. I use aider with the diff-fenced edit format for this reason; it uses far fewer tokens.

  • On your first point, I also feel like Claude is better when there's more in the context, whereas GLM-4.5 starts getting "worse".

  • I've been using architect mode in aider:

    DeepSeek R1 (handles high-level planning) combined with Qwen3 480B (handles low-level coding), or whatever is available through the Qwen Code APIs.

    It's working great.

    It solves 99.99% of problems on its own.

    The separation isn't very good in aider, so I plan to build my own tool later to get a better workflow.
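
    For anyone who wants to try the same split, something like this should work; --architect and --editor-model are real aider flags, but the model identifiers below are placeholders for whatever your provider actually calls them:

      # Placeholders only; substitute the model names your provider exposes.
      aider --architect \
            --model your-provider/deepseek-r1 \
            --editor-model your-provider/qwen3-480b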

  • I've had similarly good experiences with GLM-4.5 for smaller projects/requests. Unfortunately that did degrade with larger contexts, so I'm still treating it as a good fallback for Sonnet 4, rather than a full-blown replacement.

  • How are you using GLM-4.5? Are you consuming the API or running something like GLM-4.5-Air locally?

    • I run a privacy-focused inference company, Synthetic [1], and I use our API of course :P I actually like GLM-4.5 enough that it's currently our default recommended model for new users. But yes, otherwise I'd use the official zai API most likely, or Fireworks. GLM-4.5-Air is quite good for a local model but GLM-4.5 is better; up to you if the tradeoff is worth it — there's definitely value in the data not ever leaving your machine, but it's not going to be as strong of a model.

      1: https://synthetic.new
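
      For what it's worth, providers like these generally expose OpenAI-compatible endpoints, so a minimal sketch looks like this; the base URL and model id below are placeholders, so check your provider's docs for the real values:

        # Sketch only: base_url and model id are placeholders, not real values.
        from openai import OpenAI

        client = OpenAI(
            base_url="https://your-provider.example/v1",  # placeholder endpoint
            api_key="YOUR_API_KEY",
        )
        resp = client.chat.completions.create(
            model="glm-4.5",  # exact model id varies by provider
            messages=[{"role": "user",
                       "content": "Review this diff for bugs: ..."}],
        )
        print(resp.choices[0].message.content)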

    • Not OP. Chutes.ai charges $0.20 per 1M tokens. I don’t think it uses caching, though, because I ended up burning $30 (roughly 150M tokens at that rate) in an hour or two. I had to move back to Claude Code.

Seems like we may get local, open, workstation-grade models that are useful for coding in a few years. By workstation-grade I mean a computer around 2000 USD, and by useful for coding I mean around Sonnet 4 level. Current cloud-based models are fun and useful, but for a tool that is, or will be, so core to the developer experience, I want to be able to run it locally.

  • This will be essential for open source; otherwise open-source development will become unsustainable. I'm actually a little more optimistic: I think we'll get something beyond Sonnet 4 level within two years that can run on a $2,000 machine.

The sheer number of things "they observed" in this paper that could be whole papers in themselves is astounding! Lots of great stuff in here around training processes and data collection+synthesis.

Does anyone have any background information on the authors? Have they published similarly impressive works in the past?

This feels like the first open model that doesn’t require significant caveats when compared with frontier proprietary models. The parameter efficiency alone suggests some genuine innovations in training methodology. I am keen to see some independent verification of the results and to see how it does on Aider’s LLM Leaderboard.

Fantastic release, and it's under the Apache license too. I'm so happy that we've got open source models pushing the envelope.

This is a great model for software development - probably the best of the freely available ones.

  • Yep, I think it's the best, period. Qwen3-Coder perhaps took the limelight, but the GLM models perform and behave better in agentic loops. I can't believe they've gone from a 32B frontend-focused GLM-4 to these beasts that can challenge Claude in a matter of months.

It's OK, somewhere between Qwen2.5-VL and the frontier models (o3 / Opus 4) on visual reasoning.

Huge respect to the open-source culture in China. The Chinese are really leading the world in democratizing AI.