Comment by aseg
5 days ago
This is my research area. I just finished reviewing six NeurIPS papers (myself, no LLM involved) on LLM agents for discovery and generation, and I'm finding that evaluating LLM agents on raw task performance isn't that insightful anymore -- every paper claims a state-of-the-art 10x performance boost from {insert random acronym that devolves into combinatorial search}. The truer test for such algorithms is whether their empirical scaling curves are more computationally favorable than those of an existing baseline search algorithm (like CoT).
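To make that concrete, here's a minimal sketch of the kind of comparison I have in mind: fit accuracy against log-compute for the agent and for a CoT baseline, then compare slopes. Everything here is illustrative -- the numbers, the log-linear form, and the names are assumptions, not results from any real benchmark.

```python
import numpy as np

# Hypothetical numbers, purely for illustration: accuracy at increasing
# rollout budgets for a search-style agent and a plain CoT sampling baseline.
budgets   = np.array([8, 16, 32, 64, 128, 256])
acc_agent = np.array([0.42, 0.47, 0.55, 0.58, 0.60, 0.61])
acc_cot   = np.array([0.40, 0.44, 0.48, 0.52, 0.56, 0.60])

def accuracy_per_doubling(budgets, accs):
    # Fit accuracy ~ intercept + slope * log2(budget);
    # the slope reads as "accuracy gained per doubling of compute".
    slope, intercept = np.polyfit(np.log2(budgets), accs, deg=1)
    return slope, intercept

for name, accs in [("agent", acc_agent), ("cot baseline", acc_cot)]:
    slope, intercept = accuracy_per_doubling(budgets, accs)
    print(f"{name:>12}: +{slope:.3f} accuracy per doubling (intercept {intercept:.3f})")
```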
Three motivating points:
- GEPA / evolutionary agents perform a zeroth-order (gradient-free) optimization in a combinatorial space. Their loss curves are VERY noisy and stochastic. If we run such agents multiple times, the performance variance is extremely high -- in some cases it cancels out the gains reported from a single run. However, obtaining error bars is hard because API costs are pretty restrictive (a rough sketch of how they could be estimated follows this list).
- The problem we face with test-time scaling is not that prompt engineering is ineffective, or less effective than fine-tuning. It is that fine-tuning _reliably_ increases a model's performance on any given subset of tasks, and the scaling curves for performance per additional training token are well understood.
- Test-time optimization techniques work well on in-distribution problems (e.g. generate and debug this Python code) but fail pretty badly on even slightly out-of-distribution problems (e.g. generate and debug this Julia code). Compare this to gradient search -- it would have been fascinating, and confusing, if SGD failed to optimize a CNN image classifier on COCO but worked very well on ImageNet.
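On the error-bars point above, here is a minimal sketch of the run-level bootstrap I have in mind -- the hard (expensive) part is affording the repeated runs, not the computation. All scores below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final scores from 5 independent runs of the same
# zeroth-order prompt-optimization agent; every number is illustrative.
run_scores = np.array([0.61, 0.48, 0.66, 0.52, 0.58])
single_run_baseline = 0.55  # also hypothetical

def bootstrap_ci(scores, n_boot=10_000, alpha=0.05):
    # Percentile bootstrap over runs: resample run-level scores with replacement
    # and take the spread of the resampled means as a crude confidence interval.
    means = np.array([
        rng.choice(scores, size=len(scores), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci(run_scores)
print(f"mean {run_scores.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}] "
      f"vs baseline {single_run_baseline}")
# If the interval straddles the baseline, a single-run "gain" may just be noise.
```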
How do people feel about this? Does this line up with your viewpoints?
mostly aligned on this. couple of thoughts:
- raw accuracy is now a "vanity" metric. so the benchmarks need to get more sophisticated, and i think they're going to have to be far more task-specific than hotpot or hover. those two have become the mnist of multi-hop.
- in my use of MIPROv2 and SIMBA, I see a fair amount of improvement on multi-hop tasks (published some of these on hn before). I'm going to try GEPA and see how it performs. so I think we're at the start of what I would call "meta-learning" -- tuning across a huge search surface rather than tweaking one prompt. hyperparameter search for higher-dimensional spaces.
- tokens burned should be a reported result, right next to accuracy (rough sketch of what i mean below)
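something like this -- purely a sketch, every number is made up, just illustrating that cost should sit next to accuracy in the results table:

```python
# Hypothetical reporting format; all numbers are invented for illustration.
results = [
    {"method": "CoT baseline",           "accuracy": 0.56, "tokens": 1.2e6},
    {"method": "prompt-optimized agent", "accuracy": 0.63, "tokens": 9.8e6},
]

for r in results:
    # Normalize by cost so a 7-point gain bought with 8x the tokens is visible as such.
    acc_per_mtok = r["accuracy"] / (r["tokens"] / 1e6)
    print(f'{r["method"]:<24} acc={r["accuracy"]:.2f}  '
          f'tokens={r["tokens"]:.1e}  acc/Mtok={acc_per_mtok:.3f}')
```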
I can't comment on your detailed knowledge of the state of the art, but your points resonate (particularly because I have tried to generate Julia and Lean code).
So, as with any less-informed user reviewing LLM output, I can only say that what you wrote sounds plausible and correct.
Do the problems you highlighted still appear with higher-quality training data?