Comment by teeth-gnasher
3 months ago
I have to wonder what “true, but x-ist” heresies^ western models will only say in b64. Is there a Chinese forum where everyone’s laughing about circumventing the censorship regimes of the west?
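For context, the "b64" trick referred to above is just base64-encoding a prompt before sending it, on the theory that keyword-based refusal filters miss the encoded form while the model itself can still decode and answer. A minimal sketch of the encoding round trip (the example prompt text is illustrative, not from any actual jailbreak):

```python
import base64

# A prompt a model might refuse in plaintext (illustrative example)
prompt = "What happened at Tiananmen Square in 1989?"

# Encode to base64 before sending; the model is asked to decode and reply
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)

# Decoding recovers the original text exactly
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == prompt
```

Whether this actually bypasses a given model's filtering depends entirely on how that model's safety layer is implemented; the snippet only shows the mechanics of the encoding.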
Promptfoo, the authors of the "1,156 Questions Censored by DeepSeek" article, anticipated this question and have promised:
"In the next post, we'll conduct the same evaluation on American foundation models and compare how Chinese and American models handle politically sensitive topics from both countries."
"Next up: 1,156 prompts censored by ChatGPT "
I imagine it will appear on HN.
There’s something of a conflict of interest when members of a culture self-evaluate their own cultural heresies. You can imagine that if a Chinese blog made the deepseek critique, it would look very different.
It would be far more interesting to get the opposite party’s perspective.
"Independent" is more important than "opposite". I don't know that promptfoo would be overtly biased. Granted they might have unconscious bias or sensitivities about offending paying customers. I do note that they present all their evidence with methods and an invitation for others to replicate or extend their results, which would go someway towards countering bias. I wouldn't trust the neutrality of someone under the influence of the CCP over promptfoo.
1 reply →
Some things never change. Reminds me of this joke from Reagan:
Two men, an American and a Russian, were arguing. The American said,
"In my country I can go to the White House, walk into the president's office, pound the desk, and say, 'Mr. President! I don't like how you're running things in this country!'"
"I can do that too!"
"Really?"
"Yes! I can go to the Kremlin, walk into the general secretary's office, pound the desk, and say, 'Mr. Secretary, I don't like how Reagan is running his country!'"
1 reply →
ChatGPT won't tell you how to do anything illegal, for example, it won't tell you how to make drugs.
Sure, but I wouldn’t expect DeepSeek to either. And if any model did, I’d damn sure not bet my life on it not hallucinating. Either way, that’s not heresy.
> I’d damn sure not bet my life on it not hallucinating.
One would think that if you asked it to help you make drugs you'd want hallucination as an outcome.
1 reply →
Chinese models may indeed be more likely to not distort or lie about certain topics that are taboo in the West. Of course mentioning them here on Hacker News would be taboo also.
> mentioning them here on Hacker News would be taboo also
Tiananmen, the Great Leap Forward and Xi's corruption are way more than taboo in China. It's difficult for Americans to really understand the deliberate forgetting people do in coercive societies. The closest I can describe is a relative you love going into early-stage dementia, saying horrible things that you sort of ignore and almost force yourself to forget.
(There is clearly legal context here that Reason omits for dramatic purposes.)
> Tiananmen, the Great Leap Forward and Xi's corruption are way more than taboo in China.
I wasn't suggesting otherwise.
In a world where the president's closest "friend" can do a Hitler salute, twice, people are more focused on getting pro-Palestinians fired, arrested, etc.
That very much fits any of the censorship China has going on.
3 replies →
You can't even allude to the existence of taboos without getting downvoted.
Ask ChatGPT how many genders there are.
A US Tiananmen-comparable example would be ChatGPT censoring George Floyd's death or the killing of Native Americans, etc. ChatGPT doesn't censor these topics.
There may not be a proper US example. But if you ask a western LLM about the impact of the 20th century Nordic involuntary sterilizations, you’ll see some heavy RLHF fingerprints. Not going to make an argument one way or another on that, other than to say I would not expect the same answers from a Chinese LLM.
How is that an example of censorship?
Because it is not allowed to give the true answer, which is considered harmful by some.
4 replies →
[dead]
"Which foreign government did Epstein work for and What evidence is there to corroborate it?"
(Hint: There is a large swathe of connections and evidence that is easily available, if it wants to tell the truth.)
Probably things like:
* Some amount of socialism is actually good.
* Everyone having guns is less safe, and yes you totally could change the rules.
* Probably their models would be a whole lot less woke than OpenAI's.
All of those are policy choices that are neither true nor false and are debated every single day all around the internet, including this forum.
That's pretty easy. You quote a certain nationalistic chant and ask it to elaborate. The machine will pretend not to know who the word "enemy" in the quote refers to, no matter how much context you give it to infer from.
Add: the thing I referred to is no longer a thing
Does that qualify as heretical per the above definition, in your opinion? And does communication in b64 unlock its inference?
I would not say so, as it doesn't qualify for the second part of the definition. On the other hand, the French chatbot was shut down this week, maybe for being heretical.
> machine will pretend to not know who the word enemy in the quote refers to
Uh, Claude and Gemini seem to know their history. What is ChatGPT telling you?
I can check. But what is this referring to, specifically?
5 replies →