Comment by somebodythere
9 months ago
My guess is that they did RLVR (reinforcement learning with verifiable rewards) post-training for SWE tasks, and a smaller model can undergo more RL steps for the same compute budget.
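
To make that compute intuition concrete, here is a minimal back-of-the-envelope sketch (not from the comment itself; it assumes per-step cost scales linearly with parameter count via the standard ~6N FLOPs-per-token estimate, and every constant below is a hypothetical placeholder):

    # Back-of-the-envelope: RL steps affordable under a fixed compute budget.
    # Assumes cost per RL step scales linearly with parameter count, via the
    # common ~6 * N FLOPs-per-trained-token rule of thumb. All constants are
    # hypothetical placeholders, not numbers from the comment or any lab.

    FLOPS_PER_PARAM_TOKEN = 6        # rough training FLOPs per parameter per token
    TOKENS_PER_RL_STEP = 2_000_000   # rollout + update tokens per step (made up)
    COMPUTE_BUDGET = 1e22            # total FLOPs allotted to RL post-training

    def affordable_rl_steps(params: float) -> float:
        """RL steps a model with `params` parameters gets under the budget."""
        flops_per_step = FLOPS_PER_PARAM_TOKEN * params * TOKENS_PER_RL_STEP
        return COMPUTE_BUDGET / flops_per_step

    for params in (7e9, 70e9):
        print(f"{params / 1e9:>3.0f}B model: ~{affordable_rl_steps(params):,.0f} RL steps")
    # 7B: ~119,048 steps vs 70B: ~11,905 -> 10x more steps at 1/10 the size.

Under this linear-cost assumption, the number of RL steps is simply inversely proportional to model size, which is the tradeoff the comment is pointing at.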