Comment by jerf
1 day ago
When I was being a bad HN reader and just reacting to the title, my initial impulse was to be placating, and observe that they are probably just immature. After all, for all that has happened, this is still only a couple years' worth of development, and it does tend to take a long time to develop good benchmarks.
However the article does seem to be pointing out some fundamental issues. I'm particularly annoyed by using LLMs to evaluate the output of LLMs. Anyone with enough experience to be writing benchmarks of this sort in the first place ought to know that's a no-go. It isn't even just using "AI to evaluate AI" per se, but using a judge of the same architecture as the thing being judged maximizes the probability of fundamental failure of the benchmark to be valid due to the judge having the exact same blind spots as the thing under test. As we, at the moment, lack a diversity of AI architectures that can play on the same level as LLMs, it is simply necessary for the only other known intelligence architecture, human brains, to be in the loop for now, however many other difficulties that may introduce into the testing procedures.
Tests that a "do nothing" AI can pass aren't intrinsically invalid but they should certainly be only a very small number of the tests. I'd go with low-single-digit percentage, not 38%. But I would say it should be above zero; we do want to test for the AI being excessively biased in the direction of "doing something", which is a valid failure state.
When I was working in audio compression, evaluation was very painful because we had no programmatic way to measure how good some reconstructed audio sounds to a human. Any metric you could come up with was gameable, and direct optimization would lead to artifacts.
As a result, we always had a two-step evaluation process. We would use a suite of metrics to guide development progress (validation), but the final evaluation reported in a paper always involved subjective human listening experiments. This was expensive, but the only way to show that the codecs were actually improving.
Similarly, here it seems fine to use LLMs to judge your work in progress, but we should be requiring human evaluation for 'final' results.
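A minimal sketch of that split, assuming a simple SNR stand-in for the development metrics and plain statistics over listener ratings for the reported numbers (names here are illustrative, not from any particular codec project):

    # Minimal sketch of the two-step evaluation described above.
    import numpy as np

    def snr_db(reference: np.ndarray, decoded: np.ndarray) -> float:
        """Step 1 metric: automated, cheap, and gameable -- used only to guide development."""
        noise = reference - decoded
        return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(noise ** 2) + 1e-12))

    def final_eval(listener_scores: list) -> dict:
        """Step 2: reported results come from subjective listening tests
        (e.g. MUSHRA-style ratings), not from the development metrics."""
        scores = np.asarray(listener_scores, dtype=float)
        return {"mean_opinion": scores.mean(), "stdev": scores.std(ddof=1), "n": scores.size}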
Wouldn't that process prevent you from finding a better-sounding audio codec that doesn't improve on the typical metrics (PSNR etc.)? An alternative process would be to first construct a software metric that tries to approximate the subjective experience of humans, then use that metric to create audio codecs that optimize for it.
There are two answers to that...
The first is: how do you know the subjective optimization you're making is actually any good? You're just moving the problem back one layer of abstraction.
The second is: we did that, eventually, by training models to predict subjective listening scores from the giant pile of subjective test data we had collected over the years (ViSQoL). It's great, but we still don't trust it for end-of-the-day, cross-codec comparisons, because we don't want to reward overfitting on the trained model.
https://arxiv.org/abs/2004.09584
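In a very reduced sketch, the shape of that approach is supervised regression from objective features to human scores; this is not the actual ViSQoL training pipeline, just the general idea, with made-up feature values:

    # Toy sketch: fit a regressor from objective similarity features to human
    # mean opinion scores (MOS). Feature values and labels below are made up.
    import numpy as np
    from sklearn.svm import SVR

    X = np.array([[0.91, 0.85], [0.40, 0.33], [0.75, 0.70]])  # objective features per clip pair
    y = np.array([4.5, 1.8, 3.6])                             # human MOS labels (1-5)

    model = SVR().fit(X, y)
    predicted_mos = model.predict(X)  # in practice, predict on held-out clips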
You are describing psychoacoustic models, which work to a reasonable extent for lossy compression of audio (MP3 and successors are based on them), but I can see how it would be much more difficult/less helpful for reconstructing audio.
You gotta snag yourself one of those awesome KEMAR dummy head and torso simulators, preferably the fully accessorized luxury edition that comes with the heavy duty portable travel case with lots of room for extra ears and microphones and wigs, which is so much fun to take through airport security.
They were great for taking to Grateful Dead concerts to record the music directly in front of the Wall of Sound, and to measure the response so you can play back all your Dead tapes with that same front row psychoacoustic perspective. ;)
https://www.grasacoustics.com/industries/kemar/applications-...
https://www.grasacoustics.com/products/accessories/product/4...
LLMs evaluating LLM outputs really isn’t that dire…
Discriminating good answers is easier than generating them. Good evaluations write test sets for the discriminators to show when this is or isn’t true. Evaluating the outputs as the user might see them is more representative than having your generator do multiple tasks (e.g. solve a math query and format the output as a multiple-choice answer).
Also, human labels are good but have problems of their own; it isn’t like by using a “different intelligence architecture” we elide all the possible errors. Good instructions to the evaluation model often translate directly to better human results, showing a correlation between these two sources of sampling intelligence.
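Concretely, "write test sets for the discriminators" can be as simple as scoring the judge against a handful of human-labeled answers before trusting it. A rough sketch, where the judge callable is a stand-in for whatever model call you actually use and the labels are illustrative:

    # Sketch: meta-evaluate the LLM judge itself against human labels.
    from typing import Callable, Iterable, Tuple

    def judge_agreement(judge: Callable[[str, str], bool],
                        labeled: Iterable[Tuple[str, str, bool]]) -> float:
        """Fraction of human-labeled (question, answer, is_good) items on which
        the judge agrees with the human label."""
        items = list(labeled)
        return sum(judge(q, a) == ok for q, a, ok in items) / len(items)

    # Seed the judge's own test set with known failure modes, e.g. arithmetic:
    labeled_set = [
        ("What is 45 + 8?", "53", True),
        ("What is 45 + 8?", "63", False),  # a judge that accepts this is broken
    ]
    # judge_agreement(my_llm_judge, labeled_set) should be ~1.0 before you rely on it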
> Discriminating good answers is easier than generating them.
I don't think this is true for many fields - especially outside of math/programming. Let's say the task is "find the ten most promising energy startups in Europe." (This is essentially the sort of work I see people frequently talk about using research modes of models for here or on LinkedIn.)
In ye olden days pre-LLM you'd be able to easily filter out a bunch of bad answers from lazy humans since they'd be short, contain no detail, have a bunch of typos, formatting inconsistencies from copy-paste, etc. You can't do that for LLM output.
So unless you're a domain expert on European energy startups you can't check for a good answer without doing a LOT of homework. And if you're using a model that usually only looks at, say, the top two pages of Google results to try to figure this out, how is the validator going to do better than the original generator?
And what about when the top two pages of Google results start turning into model-generated blogspam?
If your benchmark can't evaluate prospective real-world tasks like this, it's of limited use.
A larger issue is that once your benchmark, which used this task as a criterion based on an expert's knowledge, is published, anyone making an AI agent is incredibly incentivized (intentionally or not!) to train specifically on this answer without necessarily getting better at the fundamental steps in the task.
IMO you can never use an AI agent benchmark that is published on the internet more than once.
> You can't do that for LLM output.
That's true if you're just evaluating the final answer. However, wouldn't you evaluate the context -- including internal tokens -- built by the LLM under test?
In essence, the evaluator's job isn't to do separate fact-finding, but to evaluate whether the under-test LLM made good decisions given the facts at hand.
> Good evaluations write test sets for the discriminators to show when this is or isn’t true.
If they can’t write an evaluation for the discriminator I agree. All the input data issues you highlight also apply to generators.
> IMO you can never use an AI agent benchmark that is published on the internet more than once.
This is a long-solved problem far predating AI.
You do it by releasing 90% of the benchmark publicly and holding back 10% for yourself or closely trusted partners.
Then benchmark performance can be independently evaluated to determine if performance on the 10% holdback matches the 90% public.
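In code, that check is just a comparison between the two splits; the 5-point gap threshold below is an arbitrary illustration, not an established cutoff:

    # Sketch: compare accuracy on the public 90% vs. the private 10% holdback.
    def split_report(public_results: list, holdout_results: list) -> dict:
        public = 100.0 * sum(public_results) / len(public_results)
        holdout = 100.0 * sum(holdout_results) / len(holdout_results)
        return {"public_pct": public, "holdout_pct": holdout, "gap": public - holdout}

    def looks_contaminated(report: dict, max_gap: float = 5.0) -> bool:
        """A model that is much stronger on the published split than on the
        held-back one has probably trained on the benchmark."""
        return report["gap"] > max_gap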
> Discriminating good answers is easier than generating them.
Lots of other good replies to this specific part, but also: lots of developers are struggling with the feeling that reviewing code is harder than writing it (something I'm personally not sure I agree with). I've seen that sentiment shared here on HN a lot, and it directly contradicts that particular idea.
I wish this reply and the others would engage with the sentence right after it, which says you should test this premise empirically.
> Discriminating good answers is easier than generating them.
This is actually very wrong. Consider, for instance, the fact that the people who grade your tests in school are typically more talented, capable, and trained than the people taking the test. This is true even when an answer key exists.
> Also, human labels are good but have problems of their own,
Granted, but...
> it isn’t like by using a “different intelligence architecture” we elide all the possible errors
nobody is claiming this. We elide the specific, obvious problem that using a system to test itself gives you no reliable information. You need a control.
It isn’t actually very wrong. Your example is tangential as graders in school have multiple roles — teaching the content and grading. That’s an implementation detail, not a counter to the premise.
I don’t think we should assume answering a test would be easy for a Scantron machine just because it is very good at grading them, either.
Trading control for convenience has always been the tradeoff in the recent AI hype cycle and the reason why so many people like to use ChatGPT.
What's 45+8? Is it 63?
If this sort of error isn't acceptable, it should be part of an evaluation set for your discriminator.
Fundamentally I'm not disagreeing with the article, but I also think most people who care take the above approach: if you do care, you read samples, find the issues, and patch them so you can hill-climb better.
Agree, current "thinking" models are effectively "re-run this question N times, and determine the best answer", and this LLM-evaluating-LLM loop demonstrably leads to higher quality answers against objective metrics (in math, etc).
That’s… not how thinking models work. They tend to be iterative and serial, not parallel and then pick-one.
> "I'm particularly annoyed by using LLMs to evaluate the output of LLMs."
+1, and IMO part of a general trend where we're just not serious about making sure this shit works. Higher scores make stonks go up, who cares if it actually leads to reliably working products.
But also, more importantly, it's starting to expose the fact that we haven't solved one of ML's core challenges: data collection and curation. On the training side we have obviated this somewhat (by ingesting the whole internet, for example), but on the eval side it feels like we're increasingly just going "actually, constructing rigorous evaluation data, especially at this scale, would be very expensive... so let's not".
I was at a local tech meetup recently where a recruiting firm was proudly showing off the LLM-based system they're using to screen candidates. They... did not evaluate the end-to-end efficacy of their system. At all. This seems like a theme within our industry - we're deploying these systems based purely on vibes without any real quantification of efficacy.
Or in this case, we're quantifying efficacy... poorly.
> +1, and IMO part of a general trend where we're just not serious about making sure this shit works.
I suspect quite a lot of the industry is actively _opposed_ to that, because it could be damaging for the "this changes everything" narrative.
> I'm particularly annoyed by using LLMs to evaluate the output of LLMs
This does seem a little crazy on its face, but it is yielding useful and improving tools.
It's not about it being crazy and it's not about personal opinions about AI. It's about chaos mathematics. Iterating with the same system like that has certain easy-to-understand failure states. It's why I phrased it specifically in terms of using the same architecture to validate itself. If we had two radically different AI architectures that were capable of evaluating each other, firing them at each other for evaluation purposes would be much, much less susceptible to this sort of problem than firing either of them at themselves. That will never be a good idea.
See also a cousin comment of mine observing that human brains are absolutely susceptible to the same effect. We're just so used to it that it is the water we swim through. (And arguably human brains are more diverse than current AI systems functioning at this level. No bet on how long that will be true for, though.)
Such composite systems would still have their own characteristics and certainly wouldn't be guaranteed to be perfect or anything, but at least they would not tend to iteratively magnify their own individual flaws.
Perhaps someday we will have such diverse architectures. We don't today have anything that can evaluate LLMs other than human brains, though.
> using a judge of the same architecture as the thing being judged maximizes the probability of fundamental failure of the benchmark to be valid due to the judge having the exact same blind spots as the thing under test.
That's what humans do all the time. What's the fundamental difference? Or are you saying that's also broken?
The equivalent would be having the _same human_ review their own work. We require others with different experience and fresh eyes for secondary review and for the most important task multiple people.
To some extent the same LLM with a new context history and a different prompt is sorta like that... but it's still much weaker than using a different system entirely.
How do you feel about o3 reviewing 4o-mini?
Yes, humans evaluating humans also causes human foibles to be magnified.
I cite the entire current education system. Substantiating that claim would take more than an HN comment allows, though I think most people can probably get the drift of what I'm talking about, even if we'd disagree about the details. Absolutely humans are not immune to this.
I also cite the entire concept of "fallacies", many of which are things that human brains both tend to produce and tend to evaluate poorly. An alien species might find some of our fallacies absolutely transparent, and have entirely different fallacies of their own that none of us would find convincing in the slightest, because of fundamentally different brain architectures.
I don't think AIs are ready for this yet and I don't expect LLMs ever will be, but in the future getting an outsider perspective from them in a sort of Mixture of Experts architecture could be valuable for life decisions. (I look to the future AI architectures in which LLMs are just a component but not the whole.)
... I mean, when evaluating "45 + 8 minutes" where the expected answer was "63 minutes", as in the article, a competent human reviewer does not go "hmm, yes, that seems plausible, it probably succeeded, give it the points".
I know LLM evangelists love this "humans make mistakes too" line, but, really, only an _exceptionally_ incompetent human evaluator would fall for that one.
Have you ever hired human evaluators at scale? They make all sorts of mistakes. Relatively low probability, so it factors in as noise, but I have yet to meet the human who is 100% accurate at simple tasks done thousands of times.
We want machines that are better than humans, otherwise what purpose do they serve?
A machine with human level "AI" is still useful if it can run 24/7 and you can spin up 1M instances.
> Tests that a "do nothing" AI can pass aren't intrinsically invalid but they should certainly be only a very small number of the tests. I'd go with low-single-digit percentage, not 38%. But I would say it should be above zero; we do want to test for the AI being excessively biased in the direction of "doing something", which is a valid failure state.
There is a simple improvement here: give the agent a "do nothing" button. That way it at least needs to understand the task well enough to know it should press the do nothing button.
Now a default agent that always presses it still shouldn't score 38%, but that's better than a NOP agent scoring 38%.
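A minimal sketch of what that scoring rule could look like; the behavior labels and action names are hypothetical, not from any published benchmark:

    # Sketch: score an agent that has an explicit DO_NOTHING action available.
    from typing import Optional

    def score_episode(correct_behavior: str, agent_action: Optional[str],
                      solved_task: bool = False) -> bool:
        """correct_behavior is "act" or "abstain". An agent that produces no action
        at all (None) never passes: abstaining has to be the explicit DO_NOTHING
        choice, so a crashed or silent agent scores zero instead of 38%."""
        if agent_action is None:
            return False
        if correct_behavior == "abstain":
            return agent_action == "DO_NOTHING"
        return agent_action != "DO_NOTHING" and solved_task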
> I'm particularly annoyed by using LLMs to evaluate the output of LLMs.
Even though I largely agree with parts of what you wrote, if you squint your eyes enough you can kind of see an argument along the lines of “difficult to solve but easy to verify.”
Benchmarks in software have always been bullshit. AI benchmarks are just even more bullshit since they're trying to measure something significantly more subjective and nuanced than most.
It's like using steel to produce steel. What else are you going to use? Bamboo?
I'm not sure if I'm dense, but we don't use steel to make steel (whether crucibles or "feed material").
The first person to make steel made it without steel, didn't they?
Did I miss something?
Edit0: fun tidbit - Wootz steel was made in clay crucibles with rice husks mixed into the clay (the husks would carbonize quickly and introduce air layers for better insulation), and many seemingly random objects (fruits, vegetation) were added to the crucible to control carbon content.
I highly recommend A Collection of Unmitigated Pedantry's series on steel (it's a blog; just search "ACOUP steel").
Second fun tidbit: bamboo was used as the fuel source in some furnaces, so they did indeed use bamboo, as the parent comment mentioned.
It's more like using a faulty and dangerous automated foundry to make steel when you could just hire steelworkers.
That's the real problem here - these companies are swimming in money and have armies of humans working around the clock training LLMs, there is no honest reason to nickel-and-dime the actual evaluation of benchmarks. It's like OpenAI using exact text search to identify benchmark contamination for the GPT-4 technical report. I am quite certain they had more sophisticated tools available.