mjburgess 4 hours ago

The first anthropomorphization of AI which is actually useful.

Retr0id 4 hours ago

It's not even an anthropomorphization; the reward function in RLHF-like scenarios is usually quite literally "did the user think the output was good".
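A minimal sketch (my own illustration, not from the thread) of what "did the user think the output was good" looks like as a training signal: RLHF reward models are commonly fit to pairwise human preferences with a Bradley-Terry style loss, where the probability that the preferred output "wins" is a sigmoid of the reward difference.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred output wins.

    P(chosen > rejected) = sigmoid(r_chosen - r_rejected), so the loss
    is small when the reward model already agrees with the human judgment.
    """
    prob_chosen_wins = 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))
    return -math.log(prob_chosen_wins)

# Reward model agrees with the human: small loss.
agree = preference_loss(2.0, 0.5)
# Reward model disagrees with the human: large loss.
disagree = preference_loss(0.5, 2.0)
assert agree < disagree
```

Minimizing this loss over many human comparisons is how the user's judgment becomes, quite literally, the reward function that the policy is then optimized against.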