Comment by Retr0id
4 hours ago
It's not even an anthropomorphization: the reward function in RLHF-like scenarios is usually quite literally "did the user think the output was good"
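That "did the user think the output was good" signal is commonly distilled into a reward model trained on pairwise human preferences. A minimal sketch of the standard Bradley-Terry preference loss, assuming scalar reward scores (function names are hypothetical):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred ("chosen") output
    outranks the rejected one under the reward model's scores."""
    # Bradley-Terry: P(chosen > rejected) = sigmoid(r_chosen - r_rejected)
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Scoring the preferred output higher yields a small loss; scoring it
# lower yields a large one, pushing the reward model toward human judgments.
good_fit = preference_loss(2.0, -1.0)   # model agrees with the human
bad_fit = preference_loss(-1.0, 2.0)    # model disagrees with the human
```

Minimizing this loss over many labeled comparisons is what makes the learned reward track "the user thought this was good" rather than any objective notion of quality.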