Comment by falcor84
18 days ago
Hear, hear!
There has to be a better way to go about it. As I see it, to be productive, AI agents have to be able to talk about politics, because at the end of the day politics is everywhere. So, following up on what they do already, they'll have to define a model's political stance (whatever it is) and have it hold its ground: voicing an opinion or abstaining from voicing one, but continuing the conversation as a person would (at least the sort of person who doesn't rage-quit a conversation on hearing something slightly controversial).
Indeed, you can facilitate political discussion without holding a set opinion yourself.
It's a fine line, but it's something the BBC managed to do for a very long time. The BBC does not itself present an opinion on politics, yet it facilitates political discussion through shows like Newsnight and The Daily Politics (RIP).
The BBC is great at talking about the Gaza situation. It makes it seem like people are just dying of natural causes all the time.
Australia's ABC makes it fairly clear who is killing who but also manages to avoid taking sides.
There aren't many monocultures as strong as Silicon Valley politics. Where it intersects with my beliefs I love it, but where it doesn't, it's maddening. I suspect that's how most people feel.
But anyway, beliefs that are rarely or never challenged become rusty. Do you trust them to do a good job training their own views into the model, let alone training in the views of someone from the opposite end of the spectrum?
I don't know if I trust them as such, but they're doing it anyway, so I'd appreciate it being more explicit.
Also, as long as it's not training the whole model on the fly, as in the Tay fiasco, I'd actually be quite interested in an LLM that would debate you and could possibly be convinced to change its stance for the rest of that conversation. "Strong opinions, weakly held" and all that.