Comment by siva7
8 hours ago
Could it really be that we not only vibeslop all apps nowadays, but also don't care to check how the AI solved a benchmark it claims to have solved?
Every AI lab trains on the test set. That is a big part of why we see benchmark scores climb from 1% to 30% after a few model iterations.
Models themselves definitely aren't getting better.
Frontier model developers do try to check for memorization. But until AI interpretability is a fully solved problem, how can you really know whether the model genuinely didn't memorize, or whether your memorization check just wasn't good enough?
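To make the limitation concrete, here is a minimal sketch of an n-gram overlap contamination check, the general style of heuristic used to flag test-set leakage. The function names, the choice of n, and the toy data are all illustrative assumptions, not any lab's actual pipeline:

```python
# Sketch of an n-gram overlap contamination check (illustrative only).
# The point: an exact copy of a test item is caught, but a paraphrase
# shares no exact n-gram and slips through -- so a clean check result
# does not prove the model never memorized the answer.

def ngrams(text, n=13):
    """Return the set of word-level n-grams in `text` (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_example, training_corpus, n=13):
    """Flag the test example if any n-gram also appears in training data."""
    test_grams = ngrams(test_example, n)
    return any(test_grams & ngrams(doc, n) for doc in training_corpus)

# Hypothetical toy corpus for demonstration
train = ["the quick brown fox jumps over the lazy dog near the old river bank today"]
exact_copy = "the quick brown fox jumps over the lazy dog near the old river bank today"
paraphrase = "a fast brown fox leapt over a sleepy dog close to the riverbank"

print(is_contaminated(exact_copy, train))   # exact overlap is caught
print(is_contaminated(paraphrase, train))   # paraphrase slips through
```

This is exactly the gap the comment points at: passing a surface-level check tells you the check found nothing, not that memorization didn't happen.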
Probably a more interesting benchmark would be one scored on the LLM finding exploits in the benchmark itself.