Comment by moojacob
3 months ago
This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this, if anyone knows better please correct me)
I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
Via https://news.ycombinator.com/item?id=42710976
You should check out the book Palo Alto if you haven't. Malcolm Harris should write an epilogue about this era in tech history.
You'd probably like how the book's author structures his thesis around what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's nothing new either; some variant of this story has been coming out of California for a good 200+ years now.
You write in a similar manner as the author.
I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.
>> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
> I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
This is a purposefully naive take.
They're chasing whales: the 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for reward-based systems (sin-tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc.).
OpenAI and Sam are very aware of who is using their system for what. They just don't care; it's $$$ first, forgiveness later.
> It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.
The solution is regulation.
It's not perfect, but it's better than letting unregulated predatory business practices continue to victimize vulnerable people.
I'd be interested to learn what fraction of ChatGPT revenue is from this kind of user.
OpenAI absolutely does that. That's what led to the absurd sycophancy (https://www.bbc.com/news/articles/cn4jnwdvg9qo) that they then pulled back on.
One way or another, they did. Maybe they convinced themselves they weren't doing it that aggressively, but if this is what market share is, of course they will be optimizing for it.