Comment by simonw
10 hours ago
The bicycle frame is a bit wonky but the pelican itself is great: https://gist.github.com/simonw/a6806ce41b4c721e240a4548ecdbe...
Would love to find out they're overfitting for pelican drawings.
OpenAI claims not to: https://x.com/aidan_mclau/status/1986255202132042164
Yes, Raccoon on a unicycle? Magpie on a pedalo?
Correct horse battery staple:
https://claude.ai/public/artifacts/14a23d7f-8a10-4cde-89fe-0...
Platypus on a penny farthing.
Even if not intentionally, it is probably leaking into training sets.
The estimation I did 4 months ago:
> there are approximately 200k common nouns in English, and if we square that we get 40 billion combinations. At one second per combination, that's ~1,200 years, but if we parallelize it on a supercomputer that can do 100,000 per second, it would only take roughly five days. Given that ChatGPT was trained on all of the Internet and every book ever written, I'm not sure that's infeasible.
https://news.ycombinator.com/item?id=45455786
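A quick sanity check of that arithmetic, as a rough sketch (the ~200k noun count and the 100,000-per-second throughput are the comment's assumptions, not measured figures):

    # Back-of-the-envelope check of the estimate quoted above.
    # Assumed figures (from the comment, not measured): ~200k common nouns,
    # one generation per second serially, 100,000 per second parallelized.
    nouns = 200_000
    combinations = nouns ** 2                             # noun x noun pairs
    years_serial = combinations / (60 * 60 * 24 * 365)    # at 1 combination/second
    days_parallel = combinations / 100_000 / (60 * 60 * 24)

    print(f"{combinations:,} combinations")        # 40,000,000,000
    print(f"~{years_serial:,.0f} years serially")  # ~1,268 years
    print(f"~{days_parallel:.1f} days at 100k/s")  # ~4.6 days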
How would you generate a picture of Noun + Noun in the first place in order to train the LLM on what it would look like? What's happening during that estimated one second?
But you also need to include prepositions. "A pelican on a bicycle" is not at all the same as "a pelican inside a bicycle".
There are estimated to be 100 or so prepositions in English. That gets you to 4 trillion combinations.
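Extending the same rough arithmetic with prepositions, as a sketch (the ~200k nouns and ~100 prepositions are the commenters' estimates):

    # Rough extension of the estimate: noun + preposition + noun.
    # ~200k nouns and ~100 prepositions are the commenters' estimates.
    nouns, prepositions = 200_000, 100
    combinations = nouns * prepositions * nouns   # "pelican on bicycle", "pelican inside bicycle", ...
    print(f"{combinations:,}")                    # 4,000,000,000,000 (4 trillion)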
One aspect of this is that apparently most people can't draw a bicycle much better than this: they get the elements of the frame wrong, mess up the geometry, etc.
There's a research paper from the University of Liverpool, published in 2006, in which researchers asked people to draw bicycles from memory and showed how much people overestimate their understanding of everyday objects. It's a very fun and short read.
It's called "The science of cycology: Failures to understand how everyday objects work" by Rebecca Lawson.
https://link.springer.com/content/pdf/10.3758/bf03195929.pdf
There’s also a great art/design project about exactly this. Gianluca Gimini asked hundreds of people to draw a bicycle from memory, and most of them got the frame, proportions, or mechanics wrong. https://www.gianlucagimini.it/portfolio-item/velocipedia/
A place I worked at used it as part of an interview question (it wasn't a pass/fail thing to get it 100% correct, and was partly a jumping-off point to a different question). This was in a city where nearly everyone uses bicycles as everyday transportation. It was surprising how many supposedly mechanically focused people who rode a bike every day, and had even ridden a bike to the interview, would draw a bike that would not work.
Absolutely. A technically correct bike is very hard to draw in SVG without going overboard on detail.
It's not. There are thousands of examples on the internet, though good SVG sites do put them behind paywalls.
https://www.freepik.com/free-photos-vectors/bicycle-svg
I'm not positive I could draw a technically correct bike with pen and paper (without a reference), let alone with SVG!
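For a sense of what a minimal-but-plausible bike takes, here's a rough Python sketch that emits a bare diamond-frame bicycle as SVG; the coordinates and proportions are illustrative guesses, not taken from any of the linked examples:

    # Minimal bicycle SVG: two wheels, a diamond frame (seat tube, top tube,
    # down tube, chainstay, seatstay), fork, saddle, stem and handlebar.
    # All coordinates are illustrative guesses.
    rear_axle, front_axle = (70, 160), (210, 160)
    bottom_bracket = (130, 160)   # cranks sit between the wheels, nearer the rear
    seat_cluster = (115, 95)      # top of the seat tube
    head_tube = (190, 95)         # where top tube, down tube and fork meet

    def line(a, b):
        return f'<line x1="{a[0]}" y1="{a[1]}" x2="{b[0]}" y2="{b[1]}" stroke="black" stroke-width="3"/>'

    def wheel(c):
        return f'<circle cx="{c[0]}" cy="{c[1]}" r="45" fill="none" stroke="black" stroke-width="3"/>'

    parts = [
        wheel(rear_axle), wheel(front_axle),
        line(bottom_bracket, seat_cluster),   # seat tube
        line(seat_cluster, head_tube),        # top tube
        line(head_tube, bottom_bracket),      # down tube
        line(bottom_bracket, rear_axle),      # chainstay
        line(rear_axle, seat_cluster),        # seatstay
        line(head_tube, front_axle),          # fork
        line((105, 88), (125, 88)),           # saddle
        line(head_tube, (200, 78)),           # stem
        line((200, 78), (210, 74)),           # handlebar
    ]
    svg = ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 280 220">'
           + "".join(parts) + "</svg>")
    print(svg)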
I just had an idea for an RLVR startup.
Yes, but obviously AGI will solve this by, _checks notes_ more TerraWatts!
The word is terawatts unless you mean earth-based watts. OK then, it's confirmed, data centers in space!
…in space!
Here's the animated version: https://claude.ai/public/artifacts/3db12520-eaea-4769-82be-7...
That's hilarious. It's so close!
They trained for it. That's the +0.1!
Do you find that word choices like "generate" (as opposed to "create", "author", "write" etc.) influence the model's success?
Also, is it bad that I almost immediately noticed that both of the pelican's legs are on the same side of the bicycle, but I had to look up an image on Wikipedia to confirm that they shouldn't have long necks?
Also, have you tried iterating prompts on this test to see if you can get more realistic results? (How much does it help to make them look up reference images first?)
I've stuck with "Generate an SVG of a pelican riding a bicycle" because it's the same prompt I've been using for over a year now and I want results that are sort-of comparable to each other.
I think when I first tried this I iterated a few times to get to something that reliably output SVG, but honestly I didn't keep the notes I should have.
If we do get paperclipped, I hope it is of the "cycling pelican" variety. Thanks for your important contribution to alignment, Simon!
This benchmark inspired me to have Codex/Claude build a DnD battlemap tool with SVGs.
They got surprisingly far, but I did need to iterate a few times to have them build tools that check for things like: don't put walls on roads or water.
What I think might be the next obstacle is self-knowledge. The new agents seem to have picked up ever more vocabulary about their context and compaction, etc.
As a next benchmark, you could try having one agent use a coding agent (via tmux) to build you a pelican.
This really is my favorite benchmark
There's no way they actually work on training this.
I suspect they're training on this.
I asked Opus 4.6 for a pelican riding a recumbent bicycle and got this.
https://i.imgur.com/UvlEBs8.png
It would be way, way better if they were benchmaxxing this. The pelican in the image (both images) has arms. Pelicans don't have arms, and a pelican riding a bike would use its wings.
Interesting that it seems better. Maybe something about adding a highly specific yet unusual qualifier focuses attention?
perhaps try a penny farthing?
There is no way they are not training on this.
I suspect they focus on generic SVG drawing instead.
The people who work at Anthropic are aware of simonw and his test, and people aren't unthinking data-driven machines. However valid his test is or isn't, a better score on it is convincing. If it gets, say, 1,000 people to use Claude Code over Codex, how much would that be worth to Anthropic?
$200 * 1,000 = $200k/month.
I'm not saying they are, but claiming with such certainty that they aren't, when money is on the line, seems like a questionable conclusion, unless you have some insider knowledge you'd like to share with the rest of the class.
Isn't there a point at which it trains itself on these various outputs, or someone somewhere draws one and feeds it into the model so as to pass this benchmark?
Well, the clouds are upside-down, so I don't think I can give it a pass.
I'm firing all of my developers this afternoon.
Opus 6 will fire you instead for being too slow with the ideas.
Too late. You’ve already been fired by a moltbot agent from your PHB.
I suppose the pelican must now be specifically trained for, since it's a well-known benchmark.
Best pelican so far, would you say? Or where does it rank on the pelican benchmark?
In other words, is it a pelican or a pelican't?
You’ve been sitting on that pun just waiting for it to take flight
Except for both its legs being on the same side of the bike.
What about the Pelo2 benchmark? (the gray bird that is not gray)
Do you have a GIF? I need an evolving pelican GIF.
A pelican GIF in a Pelican(TM) MP4 container.
Pretty sure at this point they train it on pelicans
Can it draw a different bird on a bike?
Here's a kākāpō riding a bicycle instead: https://gist.github.com/simonw/19574e1c6c61fc2456ee413a24528...
I don't think it quite captures their majesty: https://en.wikipedia.org/wiki/K%C4%81k%C4%81p%C5%8D
Now that I've looked it all up, I feel like that's much more accurate to a real kākāpō than the pelican is to a real pelican. It's almost as if it thinks a pelican is just a white flamingo with a different beak.
The ears on top are a cute touch
[dead]
[flagged]
I'll bite. The benchmark is actually pretty good. It shows in an extremely comprehensible way how far LLMs have come. Someone not in the know has a hard time understanding what 65.4% means on "Terminal-Bench 2.0". Comparing some crappy pelicans on bicycles is a lot easier.
It ceases to be a useful benchmark of general ability when you post it publicly for them to train against.
The field is advancing so fast that it's hard to do real science, as there will be a new SOTA by the time you're ready to publish results. I think this is a combination of that and people having a laugh.
Would you mind sharing which benchmarks you think are useful measures for multimodal reasoning?
A benchmark only tests what the benchmark is doing; the goal is to make that task correlate with actually valuable things. Graphics benchmarks are a good example: it's extremely hard to know what you'll get in a game by looking at 3DMark scores, and it varies a lot. Making an SVG of a single thing doesn't tell you much unless that ability applies to all SVG tasks.
[flagged]
Personal attacks are not allowed on HN. No more of this, please.