Comment by Retr0id
6 hours ago
It's not even an anthropomorphization; the reward function in RLHF-like scenarios is usually quite literally "did the user think the output was good".
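For concreteness, reward models in RLHF pipelines are commonly trained on pairwise human preference labels, so the learned reward literally encodes "which output did the rater think was better". A minimal toy sketch of that pairwise (Bradley-Terry style) loss, with illustrative numbers only and not any particular lab's implementation:

```python
import math

# Toy sketch: the reward model is trained so that r(chosen) > r(rejected),
# where "chosen" vs. "rejected" comes directly from a human rater's judgment.

def pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    # Bradley-Terry / logistic loss: -log sigmoid(r_chosen - r_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A rater preferred output A (scored 1.2) over output B (scored -0.3):
print(pairwise_loss(1.2, -0.3))  # small loss: reward model agrees with the rater
print(pairwise_loss(-0.3, 1.2))  # large loss: reward model disagrees with the rater
```

The policy is then optimized against this learned reward, which is why "the model is trying to please the user" is a fairly literal description of the training objective rather than a figure of speech.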