
Comment by fghorow

8 days ago

Yes. ChatGPT "safely" helped[1] my friend's daughter write a suicide note.

[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...

I have mixed feelings on this (besides obviously being sad about the loss of a good person). I think one of the useful things about AI chat is that you can talk about things that are difficult to discuss with another human, whether it's an embarrassing question or just things you don't want people to know about you. So trying to add a guard rail for everything that reflects poorly on a chat agent seems like it'd reduce its utility. I think people have trouble talking about suicidal thoughts to real therapists because, AFAIK, therapists have a duty to report self-harm, which makes people less likely to bring it up.

One thing that I think is dangerous with the current LLM models though is the sycophancy problem. Like, all the time chatGPT is like "Great question!". Honestly, most of my questions are not "great", nor are my insights "sharp", but flattery will get you a lot of places. I just worry that these things attempting to be agreeable lets people walk down paths where a human would be like "ok, no".

  • > One thing that I think is dangerous with the current LLM models though is the sycophancy problem. Like, all the time chatGPT is like "Great question!"

    100%

    In ChatGPT I have the Basic Style and Tone set to "Efficient: concise and plain". For Characteristics I've set:

    - Warm: less

    - Enthusiastic: less

    - Headers and lists: default

    - Emoji: less

    And custom instructions:

    > Minimize sycophancy. Do not congratulate or praise me in any response. Minimize, though not eliminate, the use of em dashes and over-use of “marketing speak”.
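    For API use, the same preferences can be pinned as a system message so they apply to every turn. A minimal stdlib-only sketch, assuming the OpenAI chat-completions request shape; the model name and instruction wording here are just illustrative:

    ```python
    import json

    # Anti-sycophancy instructions, mirroring the custom instructions above
    # (wording is an example, not the exact text anyone must use).
    SYSTEM_INSTRUCTIONS = (
        "Minimize sycophancy. Do not congratulate or praise me in any response. "
        "Be concise and plain; avoid marketing speak."
    )

    def build_request(user_prompt: str, model: str = "gpt-4o-mini") -> dict:
        """Build a chat-completions request body with the instructions
        pinned as the system message, so they apply to the whole chat."""
        return {
            "model": model,
            "messages": [
                {"role": "system", "content": SYSTEM_INSTRUCTIONS},
                {"role": "user", "content": user_prompt},
            ],
        }

    body = build_request("Review this schema design for flaws.")
    print(json.dumps(body, indent=2))
    # POST this body to the chat-completions endpoint with your API key.
    ```

    The point is that a system-level instruction survives the whole conversation, unlike repeating "don't flatter me" in each message.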

    • Yeah, why are basically all models so sycophantic anyway? I'm so done with getting encouragement and appreciation of my choices even when they're clearly wrong.

      I tried similar prompts but they didn't really work.

  • > Like, all the time chatGPT is like "Great question!".

    I've been trying out Gemini for a little while, and quickly got annoyed by that pattern. They're overly trained to agree maximally.

    However, in the Gemini web app you can add instructions that are inserted into each conversation. I've added one saying it shouldn't assume my suggestions are good by default, but should offer critique where appropriate.

    And so every now and then it adds a critique section, where it states why it thinks what I'm suggesting is a really bad idea or similar.

    It's overall doing a good job, and I feel something like this should have been the default.
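    The same per-conversation instruction can be sent programmatically. A stdlib-only sketch of the request body, assuming the Gemini REST `generateContent` payload shape with its system-instruction field (check the current docs for exact field names; the instruction text is illustrative):

    ```python
    import json

    # Instruction mirroring the comment above: don't assume my suggestions
    # are good; critique where appropriate. Wording is an example only.
    CRITIQUE_INSTRUCTION = (
        "Do not assume my suggestions are good by default. "
        "Offer critique where appropriate, and say plainly when an idea is bad."
    )

    def build_gemini_request(user_prompt: str) -> dict:
        """Build a generateContent-style request body that carries the
        critique instruction alongside the user's message."""
        return {
            "system_instruction": {"parts": [{"text": CRITIQUE_INSTRUCTION}]},
            "contents": [
                {"role": "user", "parts": [{"text": user_prompt}]},
            ],
        }

    body = build_gemini_request("I plan to store passwords in plain text for speed.")
    print(json.dumps(body, indent=2))
    ```

    With an instruction like this in place, the model has explicit license to answer "that's a bad idea" instead of agreeing by default.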

Do I feel bad for the above person?

I do. Deeply.

But having lived through the '80s and '90s and the satanic panic, I gotta say this is dangerous ground to tread. If this was a forum user, rather than an LLM, who had done all the same things and not reached out, it would have been a tragedy, but the story would just have been one among many.

The only reason we're talking about this is because anything related to AI gets eyeballs right now. And our youth suicide epidemic outweighs other issues that get lots more attention and money at the moment.

https://archive.is/fuJCe

(Apologies if this archive link isn't helpful, the unlocked_article_code in the URL still resulted in a paywall on my side...)

  • We probably shouldn't be using the "archive" site that hijacks your browser into DDOSing other people. I'm actually surprised HN hasn't banned it.

    • Some of us have, and some of us still use it. The functionality and the need for an archive not subject to the same constraints as the wayback machine and other institutions outweighs the blackhat hijinks and bickering between a blogger and the archive.is person/team.

      My own ethical calculus is that they shouldn't be ddos attacking, but on the other hand, it's the internet equivalent of a house egging, and not that big a deal in the grand scheme of things. It probably got gyrovague far more attention than they'd have gotten otherwise, so maybe they can cash in on that and thumb their nose at the archive.is people.

      Regardless - maybe "we" shouldn't be telling people what sites to use or not use. If you want to talk morals and ethics, then you'd better stop using Gmail, Amazon, eBay, Apple, Microsoft, any frontier AI, and hell, your ISP has probably done more evil things since last Tuesday than the average person gets up to in a lifetime, so no internet either. And totally forget about cellular service. What about the state you live in, or the country? Are they appropriately pure and ethical, or are you going to start telling people they need to defect to some bastion of ethics and nobility?

      Real life is messy. Purity tests are stupid. Use archive.is for what it is, and the value it provides which you can't get elsewhere, for as long as you can, because once they're unmasked, that sort of thing is gone from the internet, and that'd be a damn shame.


    • I can't find the claimed JS in the page source as of now, and also it displays just fine with JS disabled.

    • I'd be happy if people stopped linking to paywalled sites in the first place. There's usually a small blog on the same topic and, ironically, the small blogs posted here are better quality.

      But otherwise, without an alternative, the entire thread becomes useless. We’d have even more RTFA, degrading the site even for people who pay for the articles. I much prefer keeping archive.today to that.

    • eh, both ArchiveToday and gyrovague are shit humans. It's really just a conflict between two nerds, not "other people".

      They need to just hug it out and stop doxing each other lol

[flagged]

  • They're in an impossible situation they created themselves and inflict on the rest of us. Forgive us if we don't shed any tears for them.

    • Sure - so is Google Chrome for abetting them with a browser, and Microsoft for not using their Windows spyware to call a suicide hotline.

      I don't empathize with any of these companies, but I don't trust them to solve mental health either.


  • The leaders of these LLM companies should be held criminally liable for their products in the same way that regular people would be if they did the same thing. We've got to stop throwing up our hands and shrugging when giant corporations are evil.