Comment by WhitneyLand
6 days ago
So, we currently have o4-mini and o4-mini-high, which represent medium and high levels of "thinking," i.e., reasoning-token usage.
This announcement adds o3-pro, which pairs with o3 in the same way the o4 models go together.
It should be called o3-high, but to align with the $200 pro membership it’s called pro instead.
That said, o3 is already an incredibly powerful model. I prefer it over the new Anthropic 4 models and Gemini 2.5. Its raw power seems similar to those others, but it's so good at inline tool use that it usually comes out ahead overall.
Any non-trivial code generation or editing should use an advanced reasoning model; otherwise you're losing time fixing glitches or missing out on better-quality solutions.
Of course the caveat is cost, but there’s value on the frontier.
No, this doesn't seem to be correct, although confusion regarding model names is understandable.
o4-mini-high is the label on chatgpt.com for what the API calls o4-mini with reasoning={"effort": "high"}, whereas o4-mini on chatgpt.com is the same thing as o4-mini with reasoning={"effort": "medium"} in the API.
o3 can also be run via the API with reasoning={"effort": "high"}.
o3-pro is different from o3 with high reasoning. It has a separate endpoint, and it runs for much longer.
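For concreteness, here's a minimal sketch of the distinction using the OpenAI Python SDK and the Responses API. The model IDs and the reasoning effort parameter are as described above; treat this as illustrative rather than authoritative and check the docs for current details.

    # Sketch: same o4-mini model, different reasoning effort,
    # versus o3-pro as its own model ID.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # "o4-mini" on chatgpt.com ~ effort "medium"; "o4-mini-high" ~ effort "high"
    resp = client.responses.create(
        model="o4-mini",
        reasoning={"effort": "high"},
        input="Summarize the difference between o3 and o3-pro in one sentence.",
    )
    print(resp.output_text)

    # o3-pro is a separate model ID, not just o3 run with effort="high"
    resp_pro = client.responses.create(
        model="o3-pro",
        reasoning={"effort": "high"},
        input="Same question.",
    )
    print(resp_pro.output_text)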
See https://platform.openai.com/docs/guides/reasoning?api-mode=r...
OpenAI started strong in the naming department (ChatGPT, DALL-E) but has fallen off hard since.
It's arguable that ChatGPT isn't such a great name, either. The general public has no idea what GPT means and will often swap the letters around. It does benefit from being unique, however.