Comment by petesergeant
2 days ago
With all these things, it depends on your own eval suite. gpt-oss-120b works as well as o4-mini on my evals, which means I can run it via OpenRouter on Cerebras, where it's SO DAMN FAST and roughly 1/5th the price of o4-mini.
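For anyone curious what the OpenRouter-on-Cerebras setup mentioned above looks like in practice, here is a minimal sketch. It builds an OpenAI-style chat-completion request body with OpenRouter's provider-routing field pinned to Cerebras. The model slug and provider name follow OpenRouter's public naming, but treat them as assumptions and check the current docs before depending on them.

```python
import json

# Hypothetical request body for OpenRouter's chat completions endpoint,
# pinned to the Cerebras provider (assumed slug/provider names).
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [{"role": "user", "content": "Write fizzbuzz in Python."}],
    # Provider routing: prefer Cerebras and fail rather than silently
    # falling back to a slower (or pricier) provider.
    "provider": {"order": ["Cerebras"], "allow_fallbacks": False},
}

body = json.dumps(payload)
# POST this to https://openrouter.ai/api/v1/chat/completions with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header.
print(body)
```

Since the endpoint is OpenAI-compatible, the same payload also works through the official OpenAI SDK by overriding the base URL, if you'd rather not hand-roll the HTTP call.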
How would you compare gpt-oss-120b to (for coding):
Qwen3-Coder-480B-A35B-Instruct
GLM4.5 Air
Kimi K2
DeepSeek V3 0324 / R1 0528
GPT-5 Mini
Thanks for any feedback!
I’m afraid I don’t use any of those for coding.
You're missing out. GLM 4.5 Air and Qwen3 A3B both blow OSS 120B out of the water in my experience.