Comment by FlyingSnake
14 hours ago
I tried GLM5.1 last week after reading about it here. It was slow as molasses for routine tasks and I had to switch back to Claude. It also burned through the 5-hour credit limit faster than Claude did.
If you view the "thinking" traces, you can see why: it goes back and forth on potential solutions, writing full implementations in the thinking block and then debating them, constantly circling back to points it raised earlier, and starting every other paragraph with "Actually…" or "But wait!"
I see this with Opus too.
Indeed. And that’s with Anthropic hiding reasoning traces, unlike the other providers in these comparisons.
> "Actually…" or "But wait!"
You’re absolutely right!
Jokes aside, I did notice GLM doing these back-and-forth loops.
I was watching Qwen3.6-35B-A3B (locally) doing the same dance yesterday. It eventually finished and had a reasonable answer, but it sure went back and forth on a bunch of things I had explicitly said not to do before coming to a conclusion. At least said conclusion was not any of the things I'd said not to do.
Z.ai’s cloud offering is poor, try it with a different provider.
could you add some context for why you think it's poor?