Comment by int_19h
13 days ago
That paper again.
LLMs have been trained on synthetic outputs for quite a while since then, and they do keep getting better.
Turns out there's more to it than that.