Comment by rurban

6 days ago

We ran it locally on a free H100, with vLLM and opencode, and it performed awfully. Now we are running gpt-oss-120b, which is better but still far behind Opus 4.6, the only coding model that is better than our most experienced senior dev. gpt-5.3-codex is more at the Sonnet level on complicated C code: bearable, but still many stupidities. gpt-oss is hilariously stupid, but might work for simple TypeScript, React, or Python tasks.

For vision, Qwen is the best; it is our go-to vision model.