Comment by rsanek
2 days ago
Looks to be the ~same intelligence as gpt-oss-120B, but about 10x slower and 3x more expensive?
https://artificialanalysis.ai/models/deepseek-v3-1-reasoning
Other benchmark aggregates are less favorable to GPT-OSS-120B: https://arxiv.org/abs/2508.12461
With all these things, it depends on your own eval suite. gpt-oss-120b works as well as o4-mini over my evals, which means I can run it via OpenRouter on Cerebras where it's SO DAMN FAST and like 1/5th the price of o4-mini.
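For anyone curious, a minimal sketch of that kind of setup: gpt-oss-120b through OpenRouter's OpenAI-compatible API with routing pinned to Cerebras. The provider-routing options and the prompt here are my assumptions, not the commenter's actual config.

```python
# Sketch only: call gpt-oss-120b via OpenRouter and ask it to route to Cerebras.
# The "provider" routing field is an OpenRouter extension; treat the exact
# options below as assumptions rather than the commenter's real settings.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "Explain tail-call optimization briefly."}],
    # OpenRouter-specific routing: try Cerebras first, don't fall back elsewhere.
    extra_body={"provider": {"order": ["Cerebras"], "allow_fallbacks": False}},
)
print(resp.choices[0].message.content)
```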
How would you compare gpt-oss-120b to (for coding):
Qwen3-Coder-480B-A35B-Instruct
GLM4.5 Air
Kimi K2
DeepSeek V3 0324 / R1 0528
GPT-5 Mini
Thanks for any feedback!
My experience is that gpt-oss doesn't know much about obscure topics, so if you're using it for anything except puzzles or coding in popular languages, it won't do as well as the bigger models.
Its knowledge seems to be lacking even compared to GPT-3.
No idea how you'd benchmark this though.
> My experience is that gpt-oss doesn't know much about obscure topics
That is the point of these small models. Remove the bloat of obscure information (address that with RAG), leaving behind a core “reasoning” skeleton.
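For what it's worth, here's a toy sketch of that small-model-plus-RAG idea (not from this thread). The document list and keyword scoring are stand-ins for a real retrieval index; the model ID is the one discussed above.

```python
# Toy RAG sketch: keep obscure facts in an external store and inject only the
# retrieved snippets into the prompt. A real setup would use embeddings and a
# vector index instead of this naive keyword overlap.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

# Placeholder knowledge store (stands in for a real database/index).
DOCS = [
    "Gjirokastër is a town in southern Albania known for Ottoman-era houses.",
    "Tsumago-juku is a preserved post town on the Nakasendō trail in Japan.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by shared words with the query (toy scoring).
    q = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b",
        messages=[{
            "role": "user",
            "content": f"Use only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("What is Gjirokastër known for?"))
```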
Yeah, I guess. Just wanted to say the size difference might be accounted for by the bigger model knowing more.
Baking it in seems more user-friendly.
Something I was doing informally that seems very effective is asking for details about smaller cities and towns and lesser points of interest around the world. Bigger models tend to have a much better understanding and knowledge base for the more obscure places.
I would really love it if they figured out how to train a model that doesn't have any such knowledge baked in, but knows where to look for it. Maybe even has a clever database for that. Knowing trivia like this consistently off the top of your head is a sign of a deranged mind, artificial or not.
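If someone wanted to script that informal probe, here's a rough sketch: ask the same obscure-place question to two of the models named in this thread via OpenRouter and compare the answers by eye. The specific question is just an example, and model availability on OpenRouter would need checking.

```python
# Rough harness for the informal "obscure places" probe: send one question to
# several models and print the answers side by side for manual comparison.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

QUESTION = "What is Kuldīga, Latvia known for, and what river runs through it?"

for model in ["openai/gpt-oss-120b", "deepseek/deepseek-chat-v3.1"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---\n{resp.choices[0].message.content}\n")
```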
I don't think you're necessarily wrong, but your source is currently only showing a single provider. Comparing:
Comparing https://openrouter.ai/openai/gpt-oss-120b and https://openrouter.ai/deepseek/deepseek-chat-v3.1 across the same providers is probably better, although gpt-oss-120b has been around long enough to have more providers, and presumably for hosts to get comfortable with it and optimize how they serve it.
> same intelligence as gpt-oss-120B
Let's hope not, because gpt-oss-120B can be dramatically moronic. I am guessing the MoE contains some very dumb subnets.
Benchmarks can be a starting point, but you really have to see how the results work for you.
Clearly, this is a dark harbinger for Chinese AI supremacy /s