Comment by simonw
21 hours ago
It might not be 100% clear from the writing but this benchmark is mainly intended as a joke - I built a talk around it because it's a great way to make the last six months of model releases a lot more entertaining.
I've been considering an expanded version of this where each model outputs ten images, then a vision model helps pick the "best" of those to represent that model in a further competition with other models.
(Then I would also expand the judging panel to three vision LLMs from different model families which vote on each round... partly because it will be interesting to track cases where the judges disagree.)
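For what it's worth, here is a minimal sketch of how that expanded tournament could be wired up, assuming hypothetical generate_svg and judge_pick helpers (stubbed out with random picks below) standing in for the real image-generation and vision-model calls; the model and judge names are placeholders, not real models:

    import random
    from collections import Counter
    from itertools import combinations

    PROMPT = "Generate an SVG of a pelican riding a bicycle"
    MODELS = ["model-a", "model-b", "model-c"]   # placeholder model names
    JUDGES = ["judge-x", "judge-y", "judge-z"]   # three vision LLMs from different families

    def generate_svg(model, prompt):
        # Stub: a real version would call the model's API and return its SVG.
        return "<svg><!-- %s attempt %d --></svg>" % (model, random.randint(0, 9999))

    def judge_pick(judge, entries, question):
        # Stub: a real version would render the SVGs and ask a vision model
        # which one it prefers; here we just pick an index at random.
        return random.randrange(len(entries))

    def best_of_n(model, n=10):
        # Each model draws n pelicans; one vision model picks its "best" to advance.
        candidates = [generate_svg(model, PROMPT) for _ in range(n)]
        best = judge_pick(JUDGES[0], candidates, "Pick the best pelican")
        return candidates[best]

    def run_match(entry_a, entry_b):
        # Three judges vote on a head-to-head match; note when they disagree.
        votes = Counter()
        for judge in JUDGES:
            pick = judge_pick(judge, [entry_a[1], entry_b[1]], "Which pelican is better?")
            votes[(entry_a, entry_b)[pick][0]] += 1
        winner, _ = votes.most_common(1)[0]
        return winner, len(votes) > 1

    entries = [(model, best_of_n(model)) for model in MODELS]
    for a, b in combinations(entries, 2):
        winner, judges_split = run_match(a, b)
        print("%s vs %s: %s wins%s" % (a[0], b[0], winner,
              " (judges disagreed)" if judges_split else ""))

This sketch runs a simple round-robin rather than elimination rounds; the interesting thing to log would be the judges_split cases where the panel doesn't agree.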
I'm not sure if it's worth me doing that though since the whole "benchmark" is pretty silly. I'm on the fence.
I'd say definitely do not do that. That would make the benchmark look more serious while still being problematic for knowledge cutoff reasons. Your prompt has become popular even outside your blog, so the odds of some SVG pelicans on bicycles making it into the training data have been going up and up.
Karpathy used it as an example in a recent interview: https://www.msn.com/en-in/health/other/ai-expert-asks-grok-3...
Yeah, this is the problem with benchmarks where the questions/problems are public. They're valuable for some months, until they bleed into the training set. I'm certain a lot of the "improvements" we're seeing are just benchmarks leaking into the training set.
That’s ok, once bicycle “riding” pelicans become normative, we can ask it for images of pelicans humping bicycles.
The number of subject-verb-object combinations is near infinite. All are imaginable, but most are not plausible. A plausibility machine (an LLM) will struggle with the implausible until it can abstract well.
I’d say it doesn’t really matter. There is no universally good benchmark; benchmarks should really only be used to answer very specific questions, which may or may not be relevant to you.
Also, as the old saying goes, the only thing worse than using benchmarks is not using benchmarks.
I would definitely say he had no intention of doing that and was doubling down on the original joke.
The road to hell is paved with the best intentions
clarification: I enjoyed the pelican on a bike and don't think it's that bad =p
Yeah, Simon needs to release a new benchmark under a pen name, like Stephen King did with Richard Bachman.
Even if it is a joke, having a consistent methodology is useful. I did this for about a year with my own private benchmark of reasoning-type questions that I applied to each new open model that came out. Run it once and you get a random sample of performance. Got unlucky, or got lucky? So what; that's the experimental protocol. Running things a bunch of times and cherry-picking the best ones adds human bias and complicates the steps.
It wasn't until I put these slides together that I realized quite how well my joke benchmark correlates with actual model performance - the "better" models genuinely do appear to draw better pelicans and I don't really understand why!
How did the pelicans from the point releases of V3 and R1 (R1-0528) compare to those from the original versions of the models?
LLMs also have a 'g factor' https://www.sciencedirect.com/science/article/pii/S016028962...
Well, the most likely single random sample would be a “representative” one :)
until they start targeting this benchmark
I imagine the straightforward reason is that the “better” models are in fact significantly smarter in some tangible way, somehow.
I just don't get the fuss from the pro-LLM people who don't want anyone to shame their LLMs...
People expect LLMs to say "correct" stuff on the first attempt, not the 10,000th.
Yet these people are perfectly OK with cherry-picked success stories on YouTube and in advertisements, while being extremely vehement about this simple experiment...
...well, maybe these people rode the LLM hype train too early and are desperate to defend LLMs lest their investment go poof?
obligatory hype-graph classic: https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Ga...
Joke or not, it still correlates with my own subjective experience of the models much better than LM Arena does!
Very nice talk, accessible to the general public and to AI agents as well.
Any concerns that open "AI celebrity talks" like yours could be used in contexts that would allow LLM models to optimize their market share in ways we can't imagine yet?
Your talk might influence the funding of AI startups.
#butterflyEffect
I welcome a VC-funded pelican … anything! Clippy 2.0, maybe?
Simon, hope you are comfortable in your new role of AI Celebrity.