Comment by tedsanders

3 hours ago

What do you mean by this? We don’t train on evals, and if we did I’d quit on the spot.

(The loose version of this that’s true is that there may exist eval data contamination in pretraining. This is a hard problem to fully solve.)

It's not that loose a version; it's the reality, and repos like this are surely a focus of dedicated post-training RL. Of course you would train specifically on the task — you would just mix this eval data in with thousands of other GitHub repos.