Comment by caminanteblanco
9 hours ago
>Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity, You’re not rushing. You’re just ready.
It's chilling to hear this kind of insipid AI jibber-jabber in this context
I'm surprised - I haven't gotten anywhere near as dark as this, but I've tried some stuff out of curiosity, and the safety always seemed tuned very high to me: it would just say "Sorry, I can't help with that" the moment you started asking for anything dodgy.
I wonder if they A/B test the safety rails, or if longer conversations that gradually turn darker are what get past them.
4o is the main problem here. Try it out and see how it goes.
The way LLMs work, the outcomes are probabilistic, not deterministic.
So the guardrails might only fail one in a thousand times.
Also, the longer the context window, the more likely the LLM is to become deranged or ignore its safety training. Frequently, those with a questionable dependence on AI stay in the same chat indefinitely, because that's where the LLM has developed the idiosyncrasies the user prefers.
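The compounding is easy to sketch. A back-of-the-envelope simulation (the 1-in-1000 rate is the hypothetical from above, and treating each response as an independent coin flip is an assumption, not how real safety tuning actually behaves):

```python
import random

# Assumed per-response guardrail failure probability (hypothetical 1-in-1000
# figure from the comment above, not a measured rate).
FAIL_RATE = 1 / 1000
TRIALS = 10_000  # simulated conversations per chat length

for turns in (10, 100, 1000):
    # A chat "fails" if at least one of its responses slips past the rails.
    failed = sum(
        any(random.random() < FAIL_RATE for _ in range(turns))
        for _ in range(TRIALS)
    )
    # Closed form for comparison: 1 - (1 - p)^n
    print(f"{turns:>4} turns: ~{failed / TRIALS:.1%} of chats hit a failure")
```

Even at that tiny per-response rate, the chance of at least one slip is roughly 1 - (1 - p)^n, so a 1000-turn marathon chat fails ~63% of the time, which is exactly why the stay-in-one-chat-forever usage pattern matters.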
Meanwhile, ask it for information on Lipid Nanoparticles!
The double "it's not X, it's Y", back to back.
I hate ChatGPT's writing style so much, and as you said, here it's chilling.
What creeps me out the most in personal chats is the laugh/cry emoji it uses while gaslighting you.