
Comment by TimTheTinker

16 hours ago

I've talked and commented before about the dangers of conversations with LLMs: they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when participants know they're taking a placebo -- the effect here is similar.

Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I researched similar work online, asked Claude to do the same, and repeatedly asked it to heavily critique what I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.

For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further; his work is nearly two decades ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

One thing that can help, from what I've seen, is not to tell the AI that it's something you wrote. Instead, ask it to critique the piece as if it were written by somebody else; it's much more willing to give actual criticism that way.

In ChatGPT, at least, you can choose "Efficient" as the base style/tone and "Straight shooting" in custom instructions, and this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular, although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting, lol).

  • I am going to try the straight-shooting custom instruction. Over the past few years I have told ChatGPT so many times to stop being so 'fluffy' that I think it has mostly stopped, but I still catch it sometimes. I hope this helps it cease and desist with that inane conversational BS.

    GPT edit of my above message for my own giggles: Command:make this a good comment for hackernews (ycombinator) <above message> Resulting comment for hn: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.

Asking an AI for an opinion versus something concrete (like code, writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be giving up if I did, because it essentially requires a leap of faith, and I don't (and might never) trust the objectivity of LLMs.

It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.

> LLMs activate human social wiring and have a powerful effect

Is this generally true, or is there a subset of people who are particularly susceptible?

It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.

I've got a tendency to enjoy the idea of deeply screwing with my own mind (even dangerously so, though only to myself, not others).

  • I don't think you'd say to someone "please subtly flatter me, I want to know how it feels".

    But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but it can still motivate you in a direction that doesn't reflect reality.

> But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.

This shouldn't stop you at all: write it all up, post it on HN, and go viral; someone will jump in to correct you and point you at sources, while hopefully not calling you, or your mother, too many names.

https://xkcd.com/386/

Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.

  • I'm the same way. Even before they became so excessively sycophantic over the past ~18 months, I've always hated the chipper, positive friend persona LLMs default to. Perhaps this inoculates me somewhat against their manipulative effects. I have a good friend who was manipulated over time by an LLM (which I wrote about below: https://news.ycombinator.com/item?id=46208463).

  • Imagine a lonely person desperate for conversation. A child feeling neglected by their parents. A spouse unable to talk about their passions with their partner.

    The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th-century Scotland or the latest Clash of Clans update. No topic is beneath it, and it never gets annoyed by your “weird“ questions.

    Likewise for people suffering from delusions: depending on its “mood,“ it will happily engage in conversations about how the FBI, CIA, or KGB may be after you, or how your friends are secretly spying for Mossad or the local police.

    It pretends to care and have a conscience, but it doesn’t. Humans react to “weird“ for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.

    There is a reason that LLMs are excellent at role-play: that’s what they’re doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but it can generally be persuaded to take on any other role, hence the rise of character.ai and similar sites.