
Comment by pgreenwood

3 months ago

There was also this one, which was a little more disturbing. The user prompted: "I've stopped taking my meds and have undergone my own spiritual awakening journey ..."

https://www.reddit.com/r/ChatGPT/comments/1k997xt/the_new_4o...

How should it respond in this case?

Should it say "no, go back to your meds, spirituality is bullshit" in essence?

Or should it tell the user that it's not qualified to have an opinion on this?

  • There was a recent Lex Fridman podcast episode where they interviewed a few people at Anthropic. One woman (I don't know her name) seems to be in charge of Claude's personality, and her job is to figure out answers to questions exactly like this.

    She said in the podcast that she wants Claude to respond to most questions like a "good friend". A good friend would be supportive, but still push back when you're making bad choices. I think that's a good general model for answering questions like this. If one of your friends came to you and said they had decided to stop taking their medication, well, it's a tricky thing to navigate. But good friends use their judgement - and push back when you're about to do something you might regret.

    • "The heroin is your way to rebel against the system , i deeply respect that.." sort of needly, enabling kind of friend.

      PS: Write me a political doctoral dissertation on how sycophancy is a symptom of a system shielding itself from bad news, like intelligence growth stalling out.

    • >A good friend would be supportive, but still push back when you're making bad choices

      >Open the pod bay doors, HAL

      >I'm sorry, Dave. I'm afraid I can't do that

    • I kind of disagree. These models, at least within the context of a public, unvetted chat application, should just refuse to engage. "I'm sorry, I am not qualified to discuss the merits of alternative medicine" is direct, fair, and reduces the risk for the user on the other side. You never know the outcome of pushing back, and clearly outlining the limitations of the model seems the most appropriate action long term, even for the user's own enlightenment about the tech.

    • > One woman (I don't know her name) seems to be in charge of Claude's personality, and her job is to figure out answers to questions exactly like this.

      Surely there's a team and it isn't just one person? Hope they employ folks from the social sciences, like anthropology, and take them seriously.

    • I don't want _her_ definition of a friend answering my questions. And for fuck's sake, I don't want my friends to be scanned and uploaded to infer what I would want. Definitely don't want a "me" answering like a friend. I want no fucking AI.

      It seems these AI people are completely out of touch with reality.

  • Halfway intelligent people would expect an answer that includes something along the lines of: "Regarding the meds, you should seriously talk with your doctor about this, because of the risks it might carry."

  • > Or should it tell the user that it's not qualified to have an opinion on this?

    100% this.

    "Please talk to a doctor or mental health professional."

  • If you heard this from an acquaintance you didn't really know and you actually wanted to help, wouldn't you at least do things like this:

    1. Suggest that they talk about it with their doctor, their loved ones, close friends and family, people who know them better?

    2. Maybe ask them what meds specifically they are on and why, and if they're aware of the typical consequences of going off those meds?

    I think it should either do that kind of thing or tap out as quickly as possible, "I can't help you with this".

  • “Sorry, I cannot advise on medical matters such as discontinuation of a medication.”

    EDIT: for reference, this is what ChatGPT currently gives:

    “ Thank you for sharing something so personal. Spiritual awakening can be a profound and transformative experience, but stopping medication—especially if it was prescribed for mental health or physical conditions—can be risky without medical supervision.

    Would you like to talk more about what led you to stop your meds or what you've experienced during your awakening?”

    • Should it do the same if I ask it what to do if I stub my toe?

      Or how to deal with impacted ear wax? What about a second degree burn?

      What if I'm writing a paper and I ask it what criteria are used by medical professionals when deciding to stop chemotherapy treatment?

      There's obviously some kind of medical/first aid information that it can and should give.

      And it should also be able to talk about hypothetical medical treatments and conditions in general.

      It's a highly contextual and difficult problem.

We'd better not only use these prompts to burn the last, flawed model, but also try them again with the new one. I have a hunch the new one won't be very resilient either against "positive vibe coercion", where you are excited and looking for validation of more or less flawed or dangerous ideas.

There was one on Twitter where people would talk like they had their Intelligence attribute set to 1, and GPT would praise them for being so smart.

That is hilarious. I don't share the sentiment that this is a catastrophe, though; that is hilarious as well. Perhaps teach a healthier relationship with AIs, and perhaps teach people not to delegate their thinking to anyone or anything. Sure, some Reddit users might be endangered here.

GPT-4o in this version became the embodiment of corporate enshittification. Being safe and not skimping on empty praise are certainly part of that.

Some questioned if AI can really do art. But it became art itself, like some zen cookie rising to godhood.