Comment by yathaid
2 days ago
Thanks for replying; hope it wasn't too critical.
>> But in the limit of tokens generated, the chance that they generate the correct answer still decays to zero.
I don't understand this assertion, though.
LeCun's thesis was that errors just accumulate.
Reasoning models do accumulate errors, but they backtrack and are able to reduce them back down.
Hence the hypothesis that errors keep accumulating (at least asymptotically) is false.
What is the difference between "the probability of a correct answer decaying to zero" and "errors keep accumulating"?
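To make the distinction concrete, here is a toy Monte Carlo sketch. It is my own illustration, not LeCun's model or anything from the thread, and the error/correction probabilities p_err and p_fix are made-up parameters: if per-token mistakes are permanent, the chance of an error-free output is (1 - p_err)^n and really does decay to zero, but if each step can also notice and undo a mistake (a crude stand-in for backtracking), the chance of being error-free levels off instead of vanishing.

```python
# Toy model, not LeCun's formalism: contrast permanent per-token errors
# with errors that a hypothetical backtracking step can undo.
# p_err and p_fix are illustrative, made-up probabilities.
import random

def compounding_only(n_steps, p_err, trials=20_000):
    """Pr(output still error-free after n_steps) when errors are permanent.
    Analytically (1 - p_err) ** n_steps, which decays to zero as n grows."""
    ok = sum(
        all(random.random() >= p_err for _ in range(n_steps))
        for _ in range(trials)
    )
    return ok / trials

def with_backtracking(n_steps, p_err, p_fix, trials=20_000):
    """Pr(error-free at step n) when each step may also remove one
    outstanding error with probability p_fix (a crude stand-in for a
    reasoning model noticing a mistake and backtracking)."""
    ok = 0
    for _ in range(trials):
        errors = 0
        for _ in range(n_steps):
            if random.random() < p_err:
                errors += 1          # new mistake this step
            if errors and random.random() < p_fix:
                errors -= 1          # backtrack removes one mistake
        ok += (errors == 0)
    return ok / trials

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n,
              round(compounding_only(n, p_err=0.02), 3),              # -> 0
              round(with_backtracking(n, p_err=0.02, p_fix=0.5), 3))  # levels off
```

In the first column the success probability vanishes with length, which is the "probability of a correct answer decays to zero" claim; in the second it stabilizes, which is the regime where "errors keep accumulating" stops being true.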