Comment by bogtog
3 months ago
Unfortunately, I also don't want other people to interact with a sycophantic robot friend, yet my picker only applies to my conversation
Hey, you leave my sycophantic robot friend alone.
Sorry that you can't control other people's lives & wants
This is like arguing that we shouldn't try to regulate drugs because some people might "want" the heroin that ruins their lives.
The existing "personalities" of LLMs are dangerous, full stop. They are trained to generate text with an air of authority and to tend to agree with anything you tell them. It is irresponsible to allow this to continue without at least deliberately improving education around their use. This is why we're seeing people "falling in love" with LLMs, or seeking mental health assistance that LLMs are unqualified to render, or plotting attacks on other people that LLMs are not sufficiently prepared to detect and thwart, and so on. I think it's a terrible position to argue that we should allow this behavior (and training) to continue unrestrained because some people might "want" it.
What's your proposed solution here? Are you calling for legislation that controls the personality of LLMs made available to the public?
Pretty sure most of the current problems we see re drug use are a direct result of the nanny state trying to tell people how to live their lives. Forcing your views on people doesn’t work and has lots of negative consequences.
Comparing LLM responses to heroin is insane.
Disincentivizing something undesirable will not necessarily lead to better results, because it wrongly assumes that you can foresee all consequences of an action or inaction.
Someone who now falls in love with an LLM might instead fall for some seductress who hurts him more. Someone who now receives bad mental health assistance might receive none whatsoever.
Who are you to determine what other people want? Who made you god?
Here's something I noticed: if you yell at them (all caps, cursing them out, etc.), they perform worse, similar to a human. Since disagreeable interaction seems to produce less correct answers, some degree of personable answering may actually contribute to correctness, in which case you might have to accept some personality.
ChatGPT 5.2: allow others to control everything about your conversations. Crowd favorite!
so good.
You’re getting downvoted but I agree with the sentiment. The fact that people want a conversational robot friend is, I think, extremely harmful and scary for humanity.
Giving people what makes them feel good in the short term is not actually necessarily a good thing. See also: cigarettes, alcohol, gambling, etc.