Comment by lazystar

4 days ago

and the LLM probably responded with "You're absolutely right!" to every idea they asked about.

That's one of the things I find most interesting about LLMs: a depressingly large proportion of the population seems to enjoy interacting with a deranged sycophant that treats all of their ideas and comments as strokes of genius. Every time I read a response like "[you're right] [you're smart] [more than others]" to the most obvious observation, it makes me squirm with discomfort, especially when I've just pointed out a grave error in the LLM's reasoning.

My suspicion is that it's a reflection of how people like Altman want to be treated. As a European who has worked with US companies, my experience with work communication there can only be summed up as heavily biased towards toxic positivity. Take that up another three egotistical notches for CEOs and you get the ChatGPT tone.

  • >> toxic positivity

    I once heard of a company that mandated a more positive tone, to the point of avoiding words like "issue".

    Not an issue, it's an opportunity! Okay, we have a critical opportunity in production!

  • > As a European who has worked with US companies, my experience with work communication there can only be summed up as heavily biased towards toxic positivity

    This is definitely true, and it's something that really annoys me. It does vary quite a bit by region and industry, though, so it's not universal to the US or monolithic; the West Coast seems to be the most extreme in my experience.

  • Yes, this feature might be a prime driver of user engagement and retention, and it could even emerge "naturally" if those criteria are included for optimization in RLHF. Just as the infinite scrolling feed is the addictive hook in social media, the deranged sycophant might be the addictive hook for chatbots.