Comment by AJRF
1 month ago
Iman Mirzadeh on Machine Learning Street Talk (great podcast if you haven’t already listened!) put into words a thought I had: LLM labs are so focused on making those scores go up that it’s becoming a bit of a perverse incentive.
If your headline metric is a score, and you constantly test on that score, it becomes very tempting to do anything that makes that score go up - i.e. train on the test set.
I believe all the major ML labs are doing this now because:
- No one talks about their data set
- The scores are front and center in big releases, but there is very little discussion or nuance beyond the metric.
- The repercussions of not having a higher or comparable score are massive: the release is seen as a failure and your budget gets cut.
A more in-depth discussion of capabilities - while harder to produce - is a better signal of the quality of a release.
> LLM labs are so focused on making those scores go up it’s becoming a bit of a perverse incentive.
This seems like an odd comment to post in response to this article.
This is about showing that a new architecture can match the results of more established architectures in a more efficient way. The benchmarks are there to show this. Of course they aren’t going to say “It’s just as good – trust us!”.
He's not advocating for "trust us", he's advocating for more information than just the benchmarks.
Unfortunately, I'm not sure what a solution that can't be gamed would even look like (which is what the GP is asking for).
The best thing would be blind preference tests for a wide variety of problems across domains, but unfortunately even these can be gamed if desired. The upside is that gaming them requires being explicitly malicious, which I'd imagine would result in whistleblowing at some point. However, Claude's position on leaderboards outside of webdev arena makes me skeptical.
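For concreteness, here is a minimal sketch of how blind pairwise preference votes could be turned into Elo-style ratings. The model names and votes below are made up for illustration, and this is not any particular leaderboard's actual pipeline:

    # Toy sketch: raters see two anonymized responses in shuffled order and
    # pick a winner; votes are folded into Elo ratings. Model names and votes
    # are hypothetical.
    def elo_update(r_a, r_b, a_won, k=32):
        """Standard Elo update for one pairwise comparison."""
        expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
        score_a = 1.0 if a_won else 0.0
        return r_a + k * (score_a - expected_a), r_b - k * (score_a - expected_a)

    ratings = {"model_a": 1000.0, "model_b": 1000.0}

    # Each vote: (prompt id, model shown first, model shown second, winner).
    blind_votes = [
        ("p1", "model_a", "model_b", "model_a"),
        ("p2", "model_b", "model_a", "model_a"),
        ("p3", "model_a", "model_b", "model_b"),
    ]

    for _, first, second, winner in blind_votes:
        ratings[first], ratings[second] = elo_update(
            ratings[first], ratings[second], winner == first
        )

    print(ratings)

The robustness comes from the protocol (blind, randomized order, prompts the vendors haven't seen), not from the scoring math, which is why it can still be gamed if someone tampers with the prompt pool or the raters.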
My objection is not towards “advocating for more information”, my objection is towards “so focused on making those scores go up it’s becoming a bit of a perverse incentive”. That type of comment might apply in some other thread about some other release, but it doesn’t belong in this one.
Being _perceived_ as having the best LLM/chatbot is a billion-dollar game now. And it is an ongoing race, at breakneck speed. These companies are likely gaming the metrics in any and all ways that they can. Of course, many are probably also working on genuine improvements. And at the frontier it can be very difficult to separate a "hack" from better generalized performance. But genuine improvement is much harder, so it might already be the minority in terms of practical impact.
It is a big problem, for researchers at least, that we/they do not know what is in the training data or how that process works. Figuring out whether (for example) data leaks or overeager preference tuning caused performance to improve on a given task is extremely difficult with these giganormous black boxes.
You have potentially billions of dollars to gain, no way to be found out… it’s a good idea to initially assume there’s cheating and work back from there.
It’s not quite as bad as “no way to be found out”. There are evals that suss out contamination/training on the test set. Science means using every available means to disprove, though. Incredible claims, etc.
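One common family of such checks is n-gram overlap between benchmark items and the training corpus, in the spirit of the contamination analyses reported alongside models like GPT-3. A rough sketch is below; it assumes you can actually inspect the training data (rarely true for closed labs), and the n and threshold values are arbitrary:

    # Rough sketch of an n-gram contamination check, not any lab's actual eval.
    def ngrams(text, n=13):
        """Set of whitespace-token n-grams in a string."""
        tokens = text.lower().split()
        return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def looks_contaminated(test_item, training_docs, n=13, threshold=0.5):
        """Flag a test item if a large share of its n-grams appear verbatim
        in any training document."""
        item_grams = ngrams(test_item, n)
        if not item_grams:
            return False
        for doc in training_docs:
            overlap = len(item_grams & ngrams(doc, n)) / len(item_grams)
            if overlap >= threshold:
                return True
        return False

    # Toy usage with made-up strings:
    train = ["the quick brown fox jumps over the lazy dog near the river bank today"]
    test = "the quick brown fox jumps over the lazy dog near the river bank today"
    print(looks_contaminated(test, train, n=5))  # True: verbatim leak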
Intelligence is so vaguely defined and has so many dimensions that it is practically impossible to assess. The only approximation we have is the benchmarks we currently use. It is no surprise that model creators optimize their models for the best results in these benchmarks. Benchmarks have helped us drastically improve models, taking them from a mere gimmick to "write my PhD thesis." Currently, there is no other way to determine which model is better or to identify areas that need improvement.
That is to say, focusing on scores is a good thing. If we want our models to improve further, we simply need better benchmarks.
According to this very model, only "mere technicalities" differentiate human and AI systems ...
Current AI lacks:
- First-person perspective simulation
- Continuous self-monitoring (metacognition error <15%)
- Episodic future thinking (>72h horizon)
- Episodic Binding (memory integration), which depends on:
  - Theta-gamma cross-frequency coupling (40Hz phase synchronization)
  - Dentate gyrus pattern separation (1:7000 distinct memory encoding)
  - Posterior cingulate cortex (reinstatement of distributed patterns)
AI's failure manifests in:
- Inability to distinguish similar-but-distinct events (conceptual blending rate ~83%)
- Failure to update prior memories (persistent memory bias >69%)
- No genuine recollection (only pattern completion)
Non-Essential (Emotional Valence): while emotions influence human storytelling:
- 65% of narrative interpretations vary culturally
- Affective priming effects decay exponentially (<7s half-life)
- Neutral descriptions achieve 89% comprehension accuracy in controlled studies
The core computational challenge remains bridging:
- Symbolic representation (words/syntax)
- Embodied experience (sensorimotor grounding)
- Self-monitoring (meta-narrative control)
Current LLMs simulate 74% of surface narrative features but lack the substrate for genuine meaning-making. It's like generating symphonies using only sheet music - technically accurate, but devoid of the composer's lived experience.
Could you share a reference for those wanting to learn more?
Benchmark scores are table stakes - necessary but not sufficient to demonstrate the capabilities of a model. Casual observers might just look at the numbers, but anyone spending real money on inference will run their own tests on their own problems. If your model doesn't perform as it should, you will be found out very quickly.
Zero trust in benchmarks without opening up the model's training data. It's trivial to push results up with contaminated training data.
Ironic and delicious, since this is also how the public education system in the US is incentivized.
A comparison of testing criticality across countries would be interesting to read, if someone knows a decent reference. My sense (which I don't trust) is that test results matter at least as much or more in other places than they do in the US. For example, are England's A-levels, China's gaokao, or Germany's Abitur more or less important than US SATs/ACTs?
Goodhart's law - https://en.wikipedia.org/wiki/Goodhart%27s_law
They probably stopped talking about their datasets because it would mostly piss people off and get them sued, e.g., Meta.
This has already been a problem in AI for years.