Comment by anotherpaulg

5 months ago

I try not to let perfect be the enemy of good. All benchmarks have limitations.

The Exercism problems have proven to be very effective at measuring an LLM's ability to modify existing code. I receive a lot of feedback that the aider benchmarks correlate strongly with people's "vibes" on model coding skill. I agree. The scores have felt quite aligned with my hands-on experience coding with most of the top models over the last 18+ months.

To be clear, the purpose of the benchmark is to help me quantitatively assess and improve aider and make it more effective. But it's also turned out to be a great way to measure the coding skill of LLMs.

> The Exercism problems have proven to be very effective at measuring an LLM's ability to modify existing code

The Aider Polyglot website also states that the benchmark "...asks the LLM to edit source files to complete 225 coding exercises".

However, when looking at the actual tests [0], it doesn't seem to be about editing code bases; it's rather just solving simple programming exercises. What am I missing?

[0] https://github.com/Aider-AI/polyglot-benchmark
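For context on what these exercises look like, here is a minimal, hypothetical sketch of an Exercism-style task (the function name and test cases are illustrative, not taken from the benchmark): the model is handed an existing stub source file and must edit it until the accompanying tests pass, so the task is "editing" in a narrow sense even though each exercise is small and self-contained.

```python
# Hypothetical Exercism-style exercise (illustrative, not the actual
# benchmark harness or one of its 225 exercises).

# --- source file the LLM is asked to edit (e.g. leap.py) ---
# In the benchmark the body would initially be a stub; shown here
# already completed so the grading step below passes.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# --- tests the harness runs to grade the edited file ---
def run_tests() -> bool:
    cases = {2000: True, 1900: False, 2024: True, 2023: False}
    return all(is_leap_year(y) == expected for y, expected in cases.items())

if __name__ == "__main__":
    print("pass" if run_tests() else "fail")
```

The grading signal is simply whether the edited file makes the tests pass, which is why the benchmark measures editing mechanics (producing valid diffs against existing files) as much as problem solving.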

>I try not to let perfect be the enemy of good. All benchmarks have limitations.

Overfitting is one of the fundamental issues to contend with when trying to figure out whether any kind of model is useful at all. If your leaderboard merely corresponds to vibes and vibes are your target, you could just have a vibes leaderboard.

That's my perception as well. Most of the time, most of the devs I know, myself included, are not really creating novelty with the code itself but with the product. (Sometimes even the product is not novel, just an enhanced version of existing products.)

If the resulting code is not trying to be excessively clever or creative this is actually a good thing in my book.

The novelty and creativity should come from the product itself, especially from the users'/customers' perspective. Some people are too attached to LLM leaderboards being about novelty. I want reliable results whenever I give the instructions, whether that's the code itself or the specs built into a spec file after throwing some ideas into prompts.