← Back to context

Comment by a2128

9 days ago

Chat LLMs are becoming mirrors. It's bad user experience when you say something and they double down, it gets downvoted and RLHF tunes it out.

I asked a question about Raspberry Pis one time and it mentioned they're great low-cost computers for education or hobby. I responded saying they're so expensive these days and it went "You're absolutely correct" and changed its stance entirely saying they're focusing on enterprise and neglecting the hobby/education market. I edited my response to instead say something in agreement and ask a followup question, and it responded completely opposite of before, talking about how it's revolutionizing education and hobby computing in 2025 by being affordable and focused on education and hobby. Try this sometime, you'll realize you can't have any serious opinion-based discussion with chat models because they'll flip-flop to just mirror you in most circumstances.

This bleeds into more factual discussions too. You can sometimes gaslight these models into rejecting basic facts. Even if it did use physical features to deduce location and the image had no EXIF, there's a high chance the same reply would get it to admit it used EXIF even if it didn't. Meanwhile if it did use EXIF, you could have a full conversation about the exact physical features it used to "deduce" the location where it never admits it just checked EXIF