Comment by chad1n
6 days ago
The guys in the other thread who said that OpenAI might have quantized o3, and that that's how they reduced the price, might be right. This o3-pro might be the actual o3-preview from the beginning, and o3 might be just a quantized version. I wish someone would benchmark all of these models to check for drops in quality.
That's definitely not the case here. The new o3-pro is slow - it took two minutes just to draw me an SVG of a pelican riding a bicycle. o3-preview was much faster than that.
https://simonwillison.net/2025/Jun/10/o3-pro/
Do you think a cycling pelican is still a valid cursory benchmark? By now surely discussions about it are in the training set.
There are quite a few on Google Image search.
On the other hand they still seem to struggle!
Wow! The pelican benchmark is now saturated.
Not until I can count the feathers, ask for a front view of the same pelican, then ask for it to be animated, all still using SVG.
I wonder how much of that is because it's getting more and more included in training data.
We now need to start using walruses riding rickshaws
Would you say this is the best cycling pelican to date? I don't remember any of the others looking better than this.
Of course by now it'll be in-distribution. Time for a new benchmark...
I love that we are in the timeline where we are somewhat seriously evaluating possibly superhuman intelligence by its ability to draw an SVG of a cycling pelican.
I like the Gemini 2.5 Pro ones a little more: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#ai-...
That's one good looking pelican
This made me think of the 'draw a bike experiment', where people were asked to draw a bike from memory, and were surprisingly bad at recreating how the parts fit together in a sensible manner:
https://road.cc/content/blog/90885-science-cycology-can-you-...
ChatGPT seems to perform better than most, but with notable missing elements (where's the chain or the handlebars?). I'm not sure if those are due to a lack of understanding, or artistic liberties taken by the model?
Not distilled, same model. https://x.com/therealadamg/status/1932534244774957121?s=46&t...
Well, that might be more of a function of how long they let it 'reason' than anything intrinsic to the model?
> It's only available via the newer Responses API
And in ChatGPT Pro.
I've wondered if some kind of smart pruning is possible during evaluation.
What I mean by that is: if a neuron implements a sigmoid function and its input weights are 10, 1, 2, 3, then once the first input is active, evaluating the other inputs is mathematically pointless, since the sigmoid is already saturated and they can't meaningfully change the result. Recursively, that means the neurons that feed only those skipped inputs don't need to be evaluated either.
I have no idea how feasible or practical it is to implement such an optimization at full network scale, but I think it's interesting to think about.
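The idea above can be sketched as a toy lazy evaluator. This is a minimal illustration, not anything from a real framework: it assumes binary inputs in {0, 1}, and after each term it bounds how much the unread inputs could still shift the pre-activation; if both bounds land in the same saturated region of the sigmoid, it stops reading inputs early.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lazy_sigmoid(weights, inputs, tol=1e-4):
    """Evaluate sigmoid(w . x) for binary inputs, skipping the tail
    of the inputs once the output is pinned to within `tol`.

    Returns (value, pruned) where `pruned` says whether any inputs
    were skipped.
    """
    partial = 0.0
    for i, (w, x) in enumerate(zip(weights, inputs)):
        partial += w * x
        rest = weights[i + 1:]
        # With x in {0, 1}, each remaining term contributes between
        # min(w, 0) and max(w, 0), so the final pre-activation lies
        # in [partial + neg, partial + pos].
        pos = sum(v for v in rest if v > 0)
        neg = sum(v for v in rest if v < 0)
        lo, hi = sigmoid(partial + neg), sigmoid(partial + pos)
        if rest and hi - lo < tol:
            # Output can't move by more than tol: stop reading inputs.
            return (lo + hi) / 2, True
    return sigmoid(partial), False
```

With the weights from the comment (10, 1, 2, 3) and the first input active, the remaining inputs can shift the pre-activation by at most 6, and sigmoid(10) and sigmoid(16) differ by less than 1e-4, so the call prunes after reading a single input. Whether this ever pays off on real hardware is another question: on GPUs the data-dependent branching would likely cost more than the skipped multiply-adds.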
o3-pro is not the same as the o3-preview that was shown in Dec '24. OpenAI confirmed this for us. More on that here: https://x.com/arcprize/status/1932535380865347585
Is there a way to infer likely quantization from the output? I mean, does quantization degrade output quality in ways that are different from changes to other model properties (e.g. size, or distillation)?
What a great future we are building. If AI is supposed to run everything, everywhere... then there will be 2, maybe 3, AI companies. And nobody outside those companies knows how they work.
What makes you think so? So far, many new AI companies are sprouting and many of them seem to be able to roughly match the state-of-the-art very quickly. (But pushing the frontier seems to be harder.)
From the evidence we have so far, it does not look like there's any natural monopoly (or even natural oligopoly) in AI companies. Just the opposite. Especially with open weight models, or even more so complete open source models.
> And nobody outside those companies knows how they work.
I think you meant to say:
And nobody knows how they work.