
Comment by dlivingston

17 hours ago

> And that's exactly the point, it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool

Is this actually true? Would appreciate further reading on this if you have it.

I think this is an emergent property of the RLHF process, not a social media-style engagement optimization campaign. I don't think there is an incentive for LLM creators to optimize for engagement; there aren't ads (yet), inference is not free, and maximizing time spent querying ChatGPT doesn't really do much for OpenAI's bottom line.

They still want people to stick around and 'bond', for lack of a better term, with their particular style of chatbot. Like so many venture-funded money pits of old, the cash burn now is about customer acquisition while they develop and improve the tech. They're all racing toward a cliff, hoping either to make the jump to the stratosphere and start turning massive profits, or to fall off and splat on the rocks of bankruptcy. If they don't get the engagement loop right now, they won't have the customers when the tech and use cases catch up with the hype, and since you can only tweak these models so much after they're created, they have to refine the engagement hooks now alongside the core tech.