Comment by Eddy_Viscosity2
6 hours ago
I hadn't thought of it like that, but it makes sense. The LLMs are essentially bred to give the 'best' answers (the best fit to the test-taker's expectations), which isn't always the 'right' answer. A parallel might be media feed algorithms, which are bred to give the recommendations with the most 'engagement' rather than the most 'entertainment'.