Comment by throwaw12

1 day ago

It's an SKU from OpenAI's perspective; the broader goal and vision is (was) different. Look at Claude and GLM: both were 95% committed to dev tooling, with the best coding models, a coding harness, and even their Cowork built on top of Claude Code.

I'm not sure how this makes sense when Claude models aren't even coding-specific: Haiku, Sonnet, and Opus are the exact same models you'd use for chat or (with the recent Mythos) bleeding-edge research.

  • Anthropic's models and training data are optimized for coding use cases; that is the difference.

    OpenAI, on the other hand, has separate models optimized for coding (GPT-x-codex); Anthropic doesn't have this distinction.

    • But they detect it under the hood and apply a similar "variant": API results are not the same as in Claude Code (someone documented this before).