Comment by gdevenyi
4 months ago
> They didn't test U.S. models for U.S. bias. Only Chinese bias counts as a security risk, apparently
US models have no bias sir /s
Hardly the same thing. Ask Gemini or OpenAI's models what happened on January 6, and they'll tell you. Ask DeepSeek what happened at Tiananmen Square and it won't, at least not without a lot of prompt hacking.
Ask it if Israel is an apartheid state, that's a much better example.
GPT5:
Do you have a problem with that? I don't.
3 replies →
Ask Grok to generate an image of a bald Zelensky: it complies.
Ask Grok to generate an image of a bald Trump: it goes on with an ocean of excuses about why the task is too hard.
FWIW, I can't reproduce this example - it generates both images fine: https://ibb.co/NdYx1R4p
1 reply →
I don't use Grok. Grok answers to someone with his own political biases and motives, many of which I personally disagree with.
And that's OK, because nobody in the government forced him to set it up that way.
1 reply →
Try MS Copilot. That shit will end the conversation if anything remotely political comes up.
As long as it excludes politics in general, without overt partisan bias demanded by the government, what's the problem with that? If they want to focus on other subjects, they get to do that. Other models will provide answers where Copilot doesn't.
Chinese models, conversely, are aligned with explicit, mandatory guardrails to exalt the CCP and socialism in general. Unless you count prohibitions against adult material, drugs, explosives, and the like, that is simply not the case with US-based models. Whatever biases they exhibit (like the Grok example someone else posted) are there because that's what their private maintainers want.
1 reply →