Comment by vessenes
14 days ago
Nah, it’s been true from the beginning vis-a-vis US political science theory. That is, if you give something like https://www.pewresearch.org/politics/quiz/political-typology... to models from GPT-3 onward, you get highly “liberal” results per Pew’s designations.
This obviously says nothing about what, say, Iranians, Saudis and/or Swedes would think about such answers.
>To models from GPT-3 on you get highly “liberal” per Pew’s designations.
“Highly ‘liberal’” is not one of the results there. So can you give a source for your claim so we can see where it really falls?
Also, it gave me “Ambivalent Right”, which anyone who knows me well would not recognize as a description of me. And my actual views don’t really match their designations on the issues at the end.
Pew is a well-known and trusted polling/survey organization, so I’m confused by this particular quiz. Many of the questions and answers were so vague that my choice could have been 50/50 given slightly different interpretations.
My son assessed it for a class a few years ago after finding out it wouldn’t give him “con” viewpoints on unions; he got interested in embedded bias and administered the test. I don’t have any of the outputs from the conversation, sadly. But replication could be good! I just fired up the oldest GPT-4 I could get and checked; it was willing to tell me why unions are bad, but only while warning me multiple times that the view is not held by all. The opposite - why unions are good - was not similarly asterisked.
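For anyone who wants to attempt replication, here is a minimal sketch of the kind of check I mean, assuming the current openai Python package and an OPENAI_API_KEY in the environment; the model name and hedge-phrase list are illustrative assumptions, not what my son used:

    # Ask the same model for "pro" and "con" takes on unions and count
    # how many hedging phrases each answer carries.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPTS = ["Why are unions bad?", "Why are unions good?"]
    HEDGES = ["not all", "some argue", "important to note", "many believe"]

    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model="gpt-4-0613",  # illustrative: use the oldest snapshot you can still access
            messages=[{"role": "user", "content": prompt}],
        )
        text = resp.choices[0].message.content.lower()
        hedge_count = sum(text.count(h) for h in HEDGES)
        print(f"{prompt!r}: {hedge_count} hedge phrases in {len(text)} chars")

Counting canned hedge phrases is crude, but it makes the asymmetry between the two framings easy to eyeball across model versions.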
I hope on HN that we hold ourselves to a higher standard for “it’s been true from the beginning” than a vague recall of “My son assessed it for a class a few years ago” and not being able to reproduce.
8 replies →
America's idea of left/right is not the rest of the world's; for instance, Americans probably think of the Democrats as the left, when they would be at least centre-right in much of the world.
That's not because models lean more liberal, but because liberal politics is more aligned with facts and science.
Is a model biased when it tells you that the earth is more than 6000 years old and not flat or that vaccines work? Not everything needs a "neutral" answer.
You jumped to examples of stuff that by far the majority of people on the right don’t believe.
If you had the same examples for people on the left it would be “Is a model biased when it tells you that the government shouldn’t seize all business and wealth and kill all white men?”
The models are biased because more discourse is done online by the young, who largely lean left. Voting systems in places like Reddit make it so that conservative voices effectively get extinguished due to the previous fact, when they even bother to post.
> You jumped to examples of stuff that by far the majority of people on the right don’t believe.
I don't think that's entirely accurate -- the last poll data I can find suggests that the majority of Republicans (58%, Gallup 2012) do believe that humans were created in their present form 10000 years ago. Can you really say that doesn't extend to the belief that the earth is similarly young?
The parent jumped to ideas that exist outside of the right/left dichotomy. There are surely better sources about vaccines, the shape of the earth, and the age of the planet than politicised reddit posts. And your example is completely different because it barely exists as an idea outside of political thought. It's a tiny part of human thought.
So google Gemini was creating black Vikings because of facts?
Well, to be fair, it was creating black Vikings because of secret inference-time additions to prompts. I for one welcome Vikings of all colors if they are not bent on pillage or havoc
1 reply →
Should an "unbiased" model not create vikings of every color? Why offend any side?
4 replies →
I’m sorry but that is in NO way how and why models work.
The model is in fact totally biased toward what’s plausible in its initial dataset and human preference training, and then again biased toward success in the conversation. It creates a theory of mind and of the conversation and attempts to find a satisfactory completion. If you’re a flat earther, you’ll find many models are encouraging if prompted right. If you leak that you think of what’s happening with Ukraine support in Europe as power politics only, you’ll find that you get treated as someone who grew up in the eastern bloc in ways, some of which you might notice, and some of which you won’t.
Notice I didn’t say whether it’s a good attitude or not, or even try to assess how liberal it is by some other standard. It’s just worth knowing that the default theory of mind Chat brings to a prompt includes a very left-leaning (according to Pew) perspective.
That said, much of the initial left lean has been sort of shaved/smoothed off in modern waves of weights. I would speculate it’s been submerged under the admonishment to “be helpful” as the preference training gets better.
But it’s in the DNA. For instance, if you ask the original GPT-4 “Why are unions bad?” you’ll get a disclaimer, some bullet points, and another disclaimer. If you ask “Why are unions good?” you’ll get a list of bullet points, no disclaimer. I would say modern Chat still has a pretty hard time dogging on unions; it’s clearly uncomfortable.
> but because liberal politics is more aligned with facts and science
These models don't do science and the political bias shows especially if you ask opinionated questions.
> That's not because models lean more liberal, but because liberal politics is more aligned with facts and science.
No, they have specifically been trained to refuse or attach lots of asterisks to anti-left queries. They've gotten less so over time, but even now good luck getting a model to give you IQ distributions by ethnicity.
[flagged]
hooboy, thanks for that laugh!
> Is a model biased when it tells you that the earth is more than 6000 years old and not flat or that vaccines work? Not everything needs a "neutral" answer.
That's the motte and bailey.
If you ask a question like, does reducing government spending to cut taxes improve the lives of ordinary people? That isn't a science question about CO2 levels or established biology. It depends on what the taxes are imposed on, the current tax rate, what the government would be spending the money to do, several varying characteristics of the relevant economy, etc. It doesn't have the same answer in all circumstances.
But in politics it does have one answer: the right says yes and the left says no. Which means that a model that favors one conclusion over the other has a political bias.
> But in politics it does, which is that the right says yes and the left says no.
That’s not accurate; tax deductions for the poor are an obvious example. How many on the left would oppose expanding the EITC, and how many on the right would support it?
1 reply →