Comment by simonw

1 day ago

It wasn't until I put these slides together that I realized quite how well my joke benchmark correlates with actual model performance - the "better" models genuinely do appear to draw better pelicans and I don't really understand why!

How did the pelicans from the point releases of V3 and R1 (R1-0528) compare with those from the original versions of the models?

Well, the most likely single random sample would be a “representative” one :)
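A minimal sketch of that statistical point in Python (the quality distribution below is invented for illustration): the single most likely random draw is the mode of the distribution, i.e. a "representative" output rather than an outlier.

    import random
    from collections import Counter

    random.seed(0)

    # Hypothetical distribution of pelican-drawing quality scores (0-10)
    # for one model; the weights are invented for illustration.
    scores = list(range(11))
    weights = [1, 2, 4, 7, 10, 14, 18, 16, 12, 9, 7]

    # Repeat the "draw a single random sample" experiment many times.
    draws = Counter(random.choices(scores, weights=weights, k=100_000))
    print(draws.most_common(3))
    # The score that turns up most often is the mode of the distribution,
    # so one random sample is most likely to be a "representative" one.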

until they start targeting this benchmark

  • Right, that was the closing joke for the talk.

    • It is funny to think that a hundred years in the future there may be some vestigial area of the models’ networks that’s still tuned to drawing pelicans on bicycles.

I imagine the straightforward explanation is that the "better" models are in fact significantly smarter in some tangible way, even if it's not obvious what that way is.

I just don't get the fuss from the pro-LLM people who don't want anyone to shame their LLMs...

People expect LLMs to say "correct" stuff on the first attempt, not the 10,000th.

Yet these same people are perfectly OK with cherry-picked success stories on YouTube and in advertisements, while being extremely vehement about this simple experiment...

...well, maybe these people rode the LLM hype train too early and are desperate to defend LLMs lest their investment go poof?

obligatory hype-graph classic: https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Ga...