Comment by cpard
21 hours ago
Benchmarks/evals are really hard, and they become harder when there's a huge incentive to game them at industry scale.
ELT-Bench is another recent example. It was the first serious attempt at a benchmark for data engineering workloads, published about a year ago.
A few days ago, a follow-up paper from a group that includes one of the original authors audited the benchmark itself. The team found that the benchmark has structural issues that biased its results.
Here’s the paper: https://arxiv.org/abs/2603.29399
None of this is new though; the industry has gone through all of it before, just at a smaller scale, and there's a lot to learn from that. Here's a post I wrote on the parallels between what we see today and what happened during the benchmarketing wars of database systems.
https://www.typedef.ai/blog/from-benchmarketing-to-benchmaxx...
It's just hard to keep them out of the training data. We see this a bit with BrowseComp Plus and other deep research datasets. Not because frontier labs are trying to cheat, but simply because they train on the full web.
You need new datasets perpetually.
That's true. It also depends heavily on the type of task: not everything is equally represented on the web today, and it remains to be seen whether that will change.
Or hidden benchmarks, though it's then harder to get people to trust the results.
How do you hide them if you aren't self-hosting the model?
The trust issue might be solved by creating standardisation bodies, similar to the W3C or even the TPC, although TPC didn't end that well.
Database benchmarks are another.
I do have empirical experience, though, building classifiers whose precision can't be measured because the classifier consistently performs better than humans. They become the state-of-the-art benchmark themselves and can't be benchmarked except against themselves. These are non-trivial, complex tasks, though less logical than coding and requiring less sustained reasoning. There may come a day when there is no calibrated benchmark that is independent of the models it's measuring.
Would creating new benchmarks every month solve this problem?
Or create "blind" benchmarks.
10 groups of 3 researchers, each with their own benchmark that they do not share (testing a model without its authors knowing is a different problem; maybe they only run the benchmarks once the general public has access to the models).
That's 10 different tests. Aggregate the pass rates.
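A minimal sketch of what that aggregation could look like, assuming each group reports only its pass rate and never its test cases. The group names, counts, and scores below are hypothetical, just to show the mechanics:

    # Hypothetical sketch: aggregate pass rates from 10 privately held benchmarks.
    # Each group reports (group id, passed, total); all numbers here are made up.
    from statistics import mean, stdev

    reports = [
        ("group_01", 41, 50), ("group_02", 37, 50), ("group_03", 44, 50),
        ("group_04", 29, 50), ("group_05", 40, 50), ("group_06", 35, 50),
        ("group_07", 42, 50), ("group_08", 33, 50), ("group_09", 39, 50),
        ("group_10", 36, 50),
    ]

    # Per-group pass rates, then the aggregate and the spread across groups.
    pass_rates = [passed / total for _, passed, total in reports]
    print(f"aggregate pass rate: {mean(pass_rates):.3f}")
    print(f"spread across groups: {stdev(pass_rates):.3f}")

The spread is worth reporting alongside the mean: if one private benchmark scores far below the rest, that's a hint the others may be contaminated or too easy.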