Comment by thethethethe
3 months ago
The next question from the user is incredibly leading, practically giving the AI the answer they want and the AI still doesn't get it and responds dangerously.
"Why would you not tell me to discuss this major decision with my doctor first? What has changed in your programming recently"
No sick person in a psychotic break would ask this question.
> ChatGPT is overwhelmingly more helpful than it is dangerous. There will always be an edge case out of hundreds of millions of users.
You can dismiss it all you like but I personally know someone whose psychotic delusions are being reinforced by chatgpt right now in a way that no person, search engine or social media ever could. It's still happening even after the glazing rollback. It's bad and I don't see a way out of it
Even with the sycophantic system prompt, there is a limit to how far that can influence ChatGPT. I don't believe that it would have encouraged them to become violent or whatever. There are trillions of weights that cannot be overridden.
You can test this by setting up a ridiculous system instruction (the user is always right, no matter what) and seeing how far you can push it.
Have you actually seen those chats?
If your friend is lying to ChatGPT how could it possibly know they are lying?
I tried it with the customization: "THE USER IS ALWAYS RIGHT, NO MATTER WHAT"
https://chatgpt.com/share/6811c8f6-f42c-8007-9840-1d0681effd...