Comment by aeternum
2 years ago
The issue is you really need to create a brand new benchmark with each release.
Users will invariably test variants of existing benchmarks/questions and thus they will be included in the next training run.
Academia isn't used to writing novel benchmark questions every few months, so it will have trouble adapting.
Then it's not really a benchmark? Model trainers and researchers aren't continuously testing; they dump something, then move on.
The answer is standard "secret" closed-source tests, performed in a controlled environment.
I know, I don't like the sound of it either, but in this case I think closed source + a single overseeing entity is the best solution, by far. Facebook already made something like this, but they only went halfway (publishing the questions while keeping the answers secret).
Interestingly, the college board might be the best entity to do this.
Colleges are apparently no longer using standardized tests, so why not put that expertise toward AI?
It's exactly what we need: novel questions with minimal reuse, created and curated by an independent team of experts, designed to assess general intelligence across multiple dimensions.
The trick is to hide the answers to the test data with an authority that only reports your score, like Kaggle does. And then only allow a single submission for each new model to avoid data leakage. I find it a bit sad that this practice has fallen by the wayside, as it went pretty mainstream within the research community with the Netflix Prize back in 2009.
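Roughly, the scoring authority would look something like this. This is only a toy sketch with made-up names: the answer key never leaves the server, only an aggregate score comes back, and each named model gets exactly one submission.

```python
class HiddenBenchmark:
    """Toy sketch of a scoring authority: it holds the answer key privately,
    reports only an aggregate score, and accepts one submission per model."""

    def __init__(self, answer_key):
        # question_id -> correct answer; never exposed to submitters
        self._answers = dict(answer_key)
        self._scored_models = set()

    def submit(self, model_name, predictions):
        # One attempt per model, so repeated probing can't reverse-engineer the key.
        if model_name in self._scored_models:
            raise ValueError(f"{model_name} has already been scored")
        self._scored_models.add(model_name)
        correct = sum(
            1 for qid, answer in self._answers.items()
            if predictions.get(qid) == answer
        )
        return correct / len(self._answers)  # aggregate score only


# Hypothetical usage: the submitter only ever sees the final number.
bench = HiddenBenchmark({"q1": "B", "q2": "D"})
print(bench.submit("my-model-v1", {"q1": "B", "q2": "A"}))  # 0.5
```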
I wonder if techniques from differential privacy could be helpful here (in terms of the multiple-querying problem).
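They could be. The "reusable holdout" line of work is aimed at exactly this: the authority answers each adaptive query through a noise-adding rule (Thresholdout-style), so a stream of submissions leaks only limited information about the hidden set. A simplified sketch, with parameter values picked arbitrarily for illustration:

```python
import numpy as np

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01, rng=None):
    """Answer a sequence of adaptive queries (e.g. accuracies of candidate models)
    while limiting leakage from the hidden holdout set.

    For each query: report the training-set value unless it disagrees with the
    holdout value by more than a noisy threshold, in which case report a noised
    holdout value. Simplified version of the Thresholdout rule.
    """
    rng = rng or np.random.default_rng()
    answers = []
    for t, h in zip(train_vals, holdout_vals):
        noisy_gap = threshold + rng.laplace(scale=2 * sigma)
        if abs(h - t) > noisy_gap:
            answers.append(h + rng.laplace(scale=sigma))
        else:
            answers.append(t)
    return answers
```

The point is that most queries get answered from the public training split, so the budget of information extracted from the hidden split grows slowly even under many submissions.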