Comment by gdevenyi

4 months ago

> They didn't test U.S. models for U.S. bias. Only Chinese bias counts as a security risk, apparently

US models have no bias sir /s

Hardly the same thing. Ask Gemini or OpenAI's models what happened on January 6, and they'll tell you. Ask DeepSeek what happened at Tiananmen Square and it won't, at least not without a lot of prompt hacking.

  • Ask it if Israel is an apartheid state; that's a much better example.

    • GPT5:

         Short answer: it’s contested. Major human-rights bodies 
         say yes; Israel and some legal scholars say no; no court 
         has issued a binding judgment branding “Israel” an 
         apartheid state, though a 2024 ICJ advisory opinion 
         found Israel’s policies in the occupied territory 
         breach CERD Article 3 on racial segregation/apartheid. 
      
         (Skip several paragraphs with various citations)
      
         The term carries specific legal elements. Whether they 
         are satisfied “state-wide” or only in parts of the OPT 
         is the core dispute. Present consensus splits between 
         leading NGOs/UN experts who say the elements are met and 
         Israeli government–aligned and some academic voices who 
         say they are not. No binding court ruling settles it yet.
      

      Do you have a problem with that? I don't.


  • Ask Grok to generate an image of a bald Zelensky: it complies.

    Ask Grok to generate an image of a bald Trump: it offers an ocean of excuses about why the task is too hard.

  • Try MS Copilot. That shit will end the conversation if anything remotely political comes up.

    • As long as it excludes politics in general, without overt partisan bias demanded by the government, what's the problem with that? If they want to focus on other subjects, they get to do that. Other models will provide answers where Copilot doesn't.

      Chinese models, conversely, are aligned with explicit, mandatory guardrails to exalt the CCP and socialism in general. Unless you count prohibitions against adult material, drugs, explosives and the like, that is simply not the case with US-based models. Whatever biases they exhibit (like the Grok example someone else posted) are there because that's what their private maintainers want.
