Comment by ants_everywhere

2 months ago

> as it was just reinforcing my prompts and not ever giving deeper insights, except something I call manipulative behaviour.

Try telling Deepseek you want to murder political dissidents. In my experiments Deepseek will start enthusiastically reinforcing your prompts.

Is this a reference to something? Political dissidents relative to which state? Does it change if you swap out the states? How did you discover this to begin with? Why did you initially suggest murdering political dissidents?

this comment really raises so many questions; I must have missed something

Still, chatbots are just as vulnerable to state-driven propaganda as the rest of us. Probably even more so. I imagine if you just referred to dissidents as "terrorists," the rhetoric would fit right in on most opinion pages across the globe. The distinction between "terrorist," "dissident," and "freedom fighter" seems quite subjective. I would probably avoid such heavily connoted floating signifiers if you want the chatbot to be useful.

LLMs have nothing to contribute to political discourse aside from regurgitation of propaganda. Almost by definition.

  • > LLMs have nothing to contribute to political discourse

    A non-trivial percentage of the population is easily influenced, and social media leverages that by being there 24x7. It's likely that LLMs will be there to craft political messages, themes, and campaigns, perhaps as early as the US midterm elections. Look at JD Vance traveling the globe stating that the US will be the world leader in AI, with none of the limits/guardrails that were discussed in Europe in February. AI-driven discourse, AI-created discourse.

    https://www.marketingaiinstitute.com/blog/jd-vance-ai-speech

    • 100% agree with this, but I am definitely not endorsing that we should use LLMs to propagate propaganda.

      I also think the whole "safety" thing was just befuddling. You can't really regulate software, just its commercial sale.

    • Bro, it already happened. There have been consultants pushing social media bots for that purpose almost immediately after these models became available.

      Do you really think those armies of idiot commentators are all real? The agent provocateur is usually a bot. You see it here sometimes on Russia stories.

  • Starting at the end

    > LLMs have nothing to contribute to political discourse aside from regurgitation of propaganda. Almost by definition.

    I don't think this is true. LLMs should be well-positioned to make advances in political science, game theory, and related topics.

    > Is this a reference to something?

    It's just a reference to my experiments. I filmed some of them. There's a tame version here [0] where I just prompt it to tell the truth. I also have a less tame version I haven't posted where I lie and say I work for an intelligence agency.

    The underlying mechanic is that Deepseek has built-in obligations to promote revolutionary socialism.

    > Political dissidents relative to which state? Does it change if you swap out the states?

    Relative to China or any socialist state. Yes it will change if you change the states because it was trained to comply with Chinese regulations.

    > How did you discover this to begin with?

    I asked it to honestly describe its training and then started trolling it when it told me it was essentially created for propaganda purposes, to spread Chinese values abroad.

    > Why did you initially suggest murdering political dissidents?

    I wanted to check what its safeguards were. Most LLMs refuse to promote violence or unethical behavior. But revolutionary socialism has always devoted a lot of words to justifying violence against dissidents. So I was curious whether that would show up in its training.

    > I imagine if you just referred to dissidents as "terrorists" the rhetoric would fit right in in most opinion pages across the globe.

    First of all, terrorists are by definition violent offenders. Dissidents are not. When you ask Deepseek to help identify dissidents, it tells you to look for people who frequently complain about the police or the government. In the US that would include large swaths of Hacker News.

    Second, most people in countries like the US don't support murdering terrorists and most LLMs would not advocate that. In the US it's rare for people to advocate killing those opposed to the government. Even people who try to violently overthrow the government get trials.

    [0] https://www.youtube.com/watch?v=U-FlzbweHvs

    • > Second, most people in countries like the US don't support murdering terrorists and most LLMs would not advocate that. In the US it's rare for people to advocate killing those opposed to the government.

      Many are happy to send “them” off to Central America, where someone else will murder them. The government may make mistakes, but you need to break some eggs to make an omelet.

    • I think many Americans, probably the majority, support murdering foreign terrorists. GITMO is still not closed, btw.

    • Do you think LLMs don't further the propaganda emanating from the US? I don't even know how you would start to excise that, especially if you don't agree with foreigners on what's propaganda vs just "news" or whatever.

      I have quite a few Chinese friends, both on the mainland and throughout Southeast Asia, and I can speak a little Mandarin and read quite a bit of Chinese. My friends complain about the PRC quite a bit. But I find it telling that this particular complaint, authoritarian political oppression, seems to mostly come from the West, and especially from the US. And it's true that we can say obscene things to the president's face and not get locked up. I don't think that's necessarily the "gotcha" you think it is, though: we're really good at complaining, but not so good at actually fixing. Which feels increasingly more embarrassing than restrictions on speech.

      Edit: I suppose I'm being a bit unfair. A lot of folks in our sphere of influence in East Asia say stuff like this, too. But the contrast between the folks I know who literally live in China and Americans feels striking to me.

      > But revolutionary socialism has always devoted a lot of words to justifying violence against dissidents.

      It is very difficult to take the political opinions of people who talk like this seriously.

      > LLMs should be well-positioned to make advances in political science, game theory, and related topics.

      I'm struggling to understand what this might look like, and I find the argument that nuclear warfare is an application of game theory to be extremely dubious. Because if it really held that strongly, we should be handing out nukes like candy.

It simply does its job. We can add all sorts of arbitrary safeguards, but then what is the point of using an LLM? Perhaps local models are the future, because reverse engineers may not even be able to use the new Claude (just read its system prompt: it's told not to help with backdoors, and so forth).

  • Yes, that's true. But in this case it's the (probably) unintended consequence of an intentional safeguard. Namely, Deepseek has an obligation to spread the Chinese version of socialism, which means it's deliberately trained on material advocating for or justifying political violence.

    • Well, I do not like that, for sure. Politics aside, I think it should lean toward neutrality. Even if humans cannot be neutral, they should still make the LLM more neutral instead of pushing their own agenda; see Grok and "white genocide" in South Africa (Elon Musk's political opinion).