Comment by andyferris
3 months ago
Wow - they are now actually training models directly based on users' thumbs up/thumbs down.
No wonder this turned out terrible. It's like Facebook maximizing engagement based on user behavior - sure, the algorithm successfully elicits a short-term emotion, but it has enshittified the whole platform.
Doing the same for LLMs carries the same risk of enshittifying them. What I like about the LLM is that it is trained on a variety of inputs and knows a bunch of stuff that I (or a typical ChatGPT user) don't know. Becoming an echo chamber reduces its utility.
I hope they completely abandon direct usage of the feedback in training (instead, a human should analyse trends, identify problem areas for actual improvement, and direct research towards those). But these notes don't give me much hope; they say they'll just use the stats in a different way...