Comment by davidguetta
19 days ago
Not exactly — not at all, even, in terms of how the LLMs are trained.
In RL you can stop getting meaningful data because the model is "too good": it no longer receives the "this is a bad answer" signal, so you can't estimate the gradient.
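A minimal sketch of that failure mode, assuming GRPO-style group-relative advantages (A_i = r_i - mean(r), one common setup for RL fine-tuning of LLMs; the function name is illustrative):

```python
def advantages(rewards):
    # Group-relative advantage: each sample's reward minus the group mean.
    # The policy gradient is weighted by these advantages.
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

# Mixed group: some bad answers, so there is contrast and a nonzero gradient signal.
mixed = advantages([1.0, 0.0, 1.0, 0.0])

# Saturated group: the model answers everything correctly, so every
# advantage is zero and the gradient estimate carries no information.
saturated = advantages([1.0, 1.0, 1.0, 1.0])

print(mixed)      # [0.5, -0.5, 0.5, -0.5]
print(saturated)  # [0.0, 0.0, 0.0, 0.0]
```

Once every sampled answer gets the same reward, the advantages all cancel, which is the "can't estimate the gradient" situation the comment describes.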