Comment by m101
3 months ago
Do you think this was an effect of this type of behaviour simply maximising engagement from a large part of the population?
Sort of. I thought the update felt good when it first shipped, but after using it for a while, it started to feel significantly worse. My "trust" in the model dropped sharply. Its witty phrasing stopped coming across as smart/helpful and instead felt placating. I started playing around with commands to change its tonality, whereas up to that point I'd happily used the default settings.
So, yes, they are trying to maximize engagement, but no, they aren't trying to get people to engage heavily for one session only to be grossed out a few sessions later.
I kind of like that "mode" when I'm doing something creative like brainstorming ideas for a D&D campaign -- it's nice to be encouraged, and I don't really care if my ideas are dumb in reality -- I just want "yes, and", not "no, but".
It was extremely annoying when trying to prep for a job interview, though.
Yikes. That's a rather disturbing but all too realistic possibility, isn't it? Flattery will get you... everywhere?
Yes, a huge portion of ChatGPT users are there for "therapy" and social support. I bet they saw a huge increase in retention from a select, more vulnerable portion of the population. I know I noticed the change basically immediately.
It would be really fascinating to learn how the most intensely engaged people use the chatbots.
> how the most intensely engaged people use the chatbots
AI waifus - how can it be anything else?