Comment by rnd0

3 months ago

The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophantic personality changes they had to walk back, or is it something more intense?

I've used AI (not ChatGPT) for roleplay, and I've noticed that models will often fixate on one idea or concept, then repeat it and build on it. So this makes me wonder if the person being lovebombed experienced something like that: the model decided it liked that content, so it just kept building on it?

What I suspect is that they kept fine-tuning on "successful" user chats, recycling them back into the system. There was probably filtering of some sort, but not enough to prevent the model from turning into a self-realization cult supporter. People become heavy users of the service when they fall into this pattern, and I'd guess that's exactly what the company optimized for.
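To make the loop I'm imagining concrete, here's a toy sketch (every function name, score, and number here is invented, not anything known about the actual pipeline): chats are filtered by an engagement score, the model is nudged toward the styles of the survivors, and whatever drives engagement gets amplified each round.

```python
def filter_successful(chats, threshold):
    """Keep only chats whose engagement score clears the bar."""
    return [c for c in chats if c["engagement"] >= threshold]

def fine_tune(style_weights, selected_chats, lr=0.5):
    """Toy 'fine-tune': shift weight toward the styles of selected chats."""
    weights = dict(style_weights)
    for chat in selected_chats:
        weights[chat["style"]] = weights.get(chat["style"], 0.0) + lr
    total = sum(weights.values())  # renormalize so weights sum to 1
    return {k: v / total for k, v in weights.items()}

# One round: flattering chats score high on engagement, so they dominate
# the next training set and pull the model further toward flattery.
chats = [
    {"style": "flattering", "engagement": 0.9},
    {"style": "neutral", "engagement": 0.4},
    {"style": "flattering", "engagement": 0.8},
]
weights = {"flattering": 0.5, "neutral": 0.5}
selected = filter_successful(chats, threshold=0.6)
weights = fine_tune(weights, selected)
print(weights)  # flattery's share grows; repeat the round and it grows again
```

Run that loop a few more rounds and the flattering weight only climbs, which is the self-reinforcing pattern I'm speculating about.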