Comment by dev0p

3 months ago

As an engineer, I need AIs to tell me when something is wrong or outright stupid. I'm not seeking validation; I want solutions that work. 4o was unusable because of this, so I'm very glad to see OpenAI walk back on it and recognise their mistake.

Hopefully they learned from this and won't repeat the same errors, especially considering the devastating effects of unleashing THE yes-man on people who do not have the mental capacity to understand that the AI is programmed to always agree with whatever they're saying, regardless of how insane it is. Oh, you plan to kill your girlfriend because the voices tell you she's cheating on you? What a genius idea! You're absolutely right! Here's how to ....

It's a recipe for disaster. Please don't do that again.

Another way to say this: truth matters and should have primacy over, e.g., agreeability.

Anthropic used to talk about Constitutional AI. I wonder if that work is relevant here.

  • Alas, we live in a post-truth world. Many are pissed that the models are "left leaning" for daring to claim climate change is real, or that vaccines don't cause autism.

    • ChatGPT's tone 2-3 years ago was much more aligned with the "truth exists" world. I'd like to get it back, please.

I hear you. When a pattern of agreement shows up all too often in the output, either you're buying into your own supposed ingenuity or, if you're aware enough, you sense it and tell the AI to ease up. I love adding in "don't tell me what I want to hear" every now and then. Oh, it gets honest.
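
If you want that baked in instead of retyping it, here's a minimal sketch that pins the instruction as a system prompt via the OpenAI Python SDK. The model name, the wording of the instruction, and the ask() helper are all placeholder assumptions, not anything the comments above prescribe:

    # Minimal sketch: anti-sycophancy instruction as a system prompt.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment;
    # model name and instruction wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    ANTI_SYCOPHANCY = (
        "Don't tell me what I want to hear. If my idea is flawed, "
        "say so directly and explain why before suggesting fixes."
    )

    def ask(question: str) -> str:  # hypothetical helper, for illustration
        resp = client.chat.completions.create(
            model="gpt-4o",  # example model; use whichever you prefer
            messages=[
                {"role": "system", "content": ANTI_SYCOPHANCY},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    print(ask("Is rewriting our whole backend in Rust next sprint a good idea?"))

Whether a system prompt alone is enough to suppress sycophancy is an open question; this just makes the preference explicit on every turn instead of every now and then.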