Comment by jsheard
3 months ago
If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.
An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people are, then whoever controls the agents controls us. When they can make better memes, we've lost.
Fear that TikTok was doing exactly this was widespread enough for Congress to pass a law forbidding it.
Then Trump became President and decided to not enforce the law. His decision may have been helped along by some suspiciously large donations.
Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.
If there isn't shared belief, then it's some type of delusional disorder, perhaps a special form of folie à deux.
This is interesting.
I agree when the influence is mental-health- or society-based.
But an AI persona is a bit interesting. I guess the closest proxy would be a manipulative spouse?
You are a conspiracy theorist and a liar! /s
The problem is inside people. I have met lots of people who contributed to psychosis-inducing behavior. Most of them were not in a cult. They were regular folk who enjoy a beer, movies, and music, and who occasionally trigger others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
> Less combative, less jokey, less provocative.
This sounds like a miserable future to me. Less "jokey"? Is your ideal human a Vulcan from Star Trek or something?
I want humans to be kind, but I don't want us to have less fun. I don't want us to build a society of blandness.
Less combative, less provocative?
No thanks. It sounds like a society of lobotomized drones. I hope we do not ever let anything extinguish our fire.
Humanity walks a fine line between "lobotomized drones" (divided into two sides, sound familiar?) and "aggressive clowns" (no respect, provoked by anything, can't see an inch past their own noses). Of course, it's more than a single spectrum; there's more to it than social behavior.
It could have been better than this, but there is no option now.
I can play either of those extremes and thrive. Can you?
On what basis do you assume that that isn't exactly what "safety alignment" means, among other things?