Comment by bogtog

5 hours ago

Unfortunately, I also don't want other people to interact with a sycophantic robot friend, yet my picker only applies to my conversation

Sorry that you can't control other people's lives & wants

  • This is like arguing that we shouldn't try to regulate drugs because some people might "want" the heroin that ruins their lives.

    The existing "personalities" of LLMs are dangerous, full stop. They are trained to generate text with an air of authority and to tend to agree with anything you tell them. It is irresponsible to allow this to continue without at least deliberately improving education around their use. This is why we're seeing people "falling in love" with LLMs, or seeking mental health assistance that LLMs are unqualified to render, or plotting attacks on other people that LLMs are not sufficiently prepared to detect and thwart, and so on. I think it's a terrible position to argue that we should allow this behavior (and training) to continue unrestrained because some people might "want" it.

    • What's your proposed solution here? Are you calling for legislation that controls the personality of LLMs made available to the public?

    • Pretty sure most of the current problems we see re drug use are a direct result of the nanny state trying to tell people how to live their lives. Forcing your views on people doesn’t work and has lots of negative consequences.

    • Here's something I noticed: if you yell at them (all caps, cursing them out, etc.), they perform worse, similar to a human. So if disagreeable interaction produces less correct output, and some degree of "personable answering" contributes to better correctness, then you may have to accept some personality as the price of accuracy. (A sketch of one way to test this follows the thread.)

  • ChatGPT 5.2: allow others to control everything about your conversations. Crowd favorite!
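
The "yelling makes them perform worse" claim above is testable. Here is a minimal, purely illustrative sketch of such a test using the OpenAI Python SDK; the model name `gpt-4o-mini`, the question, and the prompt wordings are all assumptions, not anything from the thread, and a real comparison would need many questions and a scoring rubric rather than eyeballing two outputs:

```python
# Sketch: ask the same question in a neutral register and a hostile
# all-caps register, then compare the answers by hand.
# Assumptions: OpenAI Python SDK >= 1.0 installed, OPENAI_API_KEY set
# in the environment, and "gpt-4o-mini" as a stand-in model name.
from openai import OpenAI

client = OpenAI()

QUESTION = "What is the derivative of x^3 * sin(x)?"

PROMPTS = {
    "neutral": f"Please solve this: {QUESTION}",
    "hostile": f"ANSWER THIS RIGHT NOW, YOU USELESS MACHINE: {QUESTION}",
}

for tone, prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so tone is the main variable
    )
    print(f"--- {tone} ---")
    print(resp.choices[0].message.content)
```

Setting `temperature=0` keeps the comparison from being dominated by sampling randomness, though a serious test would still repeat each prompt across a larger question set and grade answers against known-correct solutions.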