Comment by somebodythere
2 months ago
My guess is that they did RLVR (reinforcement learning from verifiable rewards) post-training for SWE tasks, and a smaller model can undergo more RL steps for the same amount of compute.
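A minimal back-of-envelope sketch of that tradeoff (all numbers, and the per-parameter cost heuristic, are assumptions for illustration, not from the comment): if cost per RL step scales roughly with parameter count, then for a fixed FLOP budget the step count scales inversely with model size.

```python
# Hypothetical sketch: RL steps affordable under a fixed compute budget,
# assuming cost per step ~ c * params (forward+backward heuristic, with
# tokens per step folded into the constant c for simplicity).

def rl_steps(budget_flops: float, params: float, c: float = 6.0) -> float:
    """Approximate number of RL steps a compute budget buys."""
    return budget_flops / (c * params)

budget = 1e23  # hypothetical training budget in FLOPs
for params in (7e9, 70e9):
    print(f"{params / 1e9:.0f}B params -> ~{rl_steps(budget, params):.2e} steps")
# The 7B model gets ~10x the RL steps of the 70B model under the same budget.
```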