Comment by bigyabai

1 day ago

> they can take them out

Basic informatics says this is objectively impossible. Every human language comes pre-baked with its own political biases. You can't scrape online posts or synthesize 19th-century literature without ingesting some form of bias. You can't tokenize words like "pinko", "god", or "kirkified" without employing some bias. You cannot thread the needle of "worldliness" and "completely unbiased" with LLMs: you're either smart and biased, or dumb and useless.

I judge models on how well they code. I can use Wikipedia to learn about Chinese protests, but not to write code. Using political bias as a benchmark is an unserious snipe hunt that researchers deliberately ignore, for good reason.