Comment by hexaga

3 months ago

Were those trained with RLHF? IIRC the earliest instruction-following models used only SFT, not preference tuning.

Like the GP said, I think this is fundamentally a problem of training on human preference feedback: you end up with a model whose outputs cater to human preferences, which (necessarily?) includes the degenerate case of sycophancy.
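A toy sketch of that failure mode: if raters' pairwise preferences are even mildly swayed by how agreeable a response is, a Bradley-Terry reward model fitted to those preferences learns to assign agreeableness positive reward, so the policy optimized against it gets pushed toward sycophancy. Everything here (the two features, the rater weights, the noise level) is made up for illustration, not any lab's actual setup.

```python
# Toy Bradley-Terry reward model fit to simulated human preferences.
# Features per response: (correctness, agreeableness). Hypothetical numbers.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_pair():
    """Simulate one pairwise comparison from a human rater."""
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    # Rater mostly values correctness, but agreeableness also sways
    # the judgment -- this is the "human preference" signal.
    rater = lambda r: 1.0 * r[0] + 0.6 * r[1]
    if rater(a) + random.gauss(0, 0.3) > rater(b):
        return a, b
    return b, a

# Fit reward r(x) = w . x by gradient ascent on the pairwise
# log-likelihood: log sigmoid(r(winner) - r(loser)).
w = [0.0, 0.0]
lr = 0.1
for _ in range(20000):
    win, lose = sample_pair()
    diff = [x - y for x, y in zip(win, lose)]
    p = sigmoid(sum(wi * di for wi, di in zip(w, diff)))
    for i in range(2):
        w[i] += lr * (1 - p) * diff[i]

# The learned reward ends up with a positive weight on agreeableness,
# even though agreeableness was never an explicit training target.
print(w)
```

The point isn't the specific numbers; it's that any reward for agreeableness latent in rater behavior gets baked into the reward model, and the policy then optimizes for it directly.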