Comment by VHRanger
13 hours ago
The issue is that benchmarks that look insightful end up being gamed by labs quickly (Goodhart's law).
The best LLM benchmarks test around the margins of those behaviors: tasks that are difficult and correlate with usefulness, while being removed enough to stay unpolluted.