Comment by lumost

21 hours ago

At the time of ChatGPT’s sycophancy phase, I was pondering a major career move. To this day I have questions about how much my final decision was influenced by the sycophancy.

While many people who engage with AIs haven’t experienced anything more than a bout of flattery, I think it’s worth considering that AIs may become superhuman manipulators, capable of convincing most people of anything. As other posters have commented, the boiling-frog aspect is real: to what extent is the AI priming the user to accept an outcome? And to what extent is it easier to manipulate a human labeler into accepting a statement than to make a correct statement?