Comment by SlinkyOnStairs

2 days ago

> not sure what the point is though? Mine is that Gemini was biased so hard that it was generating diverse Founding Fathers, which is factually untrue.

While your first post's criticism of Gemini's nonsense is valid, that critique is often framed as "Everything was neutral until the wokerati put all this woke into our world". Hence the big response.

Taking away the hamfisted diversity doesn't fix the underlying problems Google tried to cover up by adding it.

> The fact that history has pro-American values when written by Americans is also true, but it really has nothing to do with the argument: if an AI is able to see through such propaganda and provide a balanced view on it as a human would, that is enough

The problem is that it doesn't "see through" anything. LLMs don't "think".

In your example, it's not reviewing historical documents about the US constitution; it's statistically approximating all the historical and political writing about the US constitution. (Of which there is a lot.)

Now, the training and prompt will influence which way the LLM leans, but without explicit instruction or steered training, it'll "average out" all the prior written evaluations of the US constitution and absorb the biases therein.

> So it's definitely seeing through any form of propaganda you describe

I would argue the opposite (though I can only go off your snippets): it's mirroring the broad US consensus on its constitution pretty well. And this kind of "Well, who's to say whether X is good or bad" response is something LLMs have been heavily trained and system-prompted to produce; many people have noted how hard it is to get a straight answer out of LLMs.

To pick out one detail: the undercurrent of 'American Exceptionalism' shows in how the Constitutional Amendments are framed as "self-correction" and the US constitution as "improvable". By European standards, the US constitution is hard to change. In many countries, a simple 2/3rds supermajority in both houses is sufficient. This also shows in the number of changes: the Constitution of Norway is but 26 years younger than the US', yet has racked up hundreds of changes, notably including a full rewrite in 2014. (Such rewrites have been fairly common in the past century.) By European standards, the US constitution is a calcified mess.

Now, this doesn't mean Grok is "evil"; this is just one small detail. It's a fine enough summary, and would certainly get whatever kid uses it for homework a passing grade. But it's illustrative of how LLM output is influenced by the prior writing and cultural views on a subject. If you're bilingual, try asking the same thing in two languages. (Or if you're not, try it anyway and stick the output into Google Translate to get an idea.)

It's the things people generally don't think about when writing that are most likely to fly under the radar.

So if I understand your point, you are saying "LLMs are not going to do better than a (possibly imperfect) average human consensus unless we actively bias them"? First of all, that does not seem so bad, if that's the case.

Secondly, trying to go further seems to edge into the entire question of "is there an actual truth, and can LLMs be trained to find it?".

My opinion is that in many cases there is "truth", and typically the human consensus, when acting in good faith, tries to converge on it. When it's not necessarily possible to have a single "truth" (as in history, for example, where perspective is very important), "consensus" tends to manifest as several currents of thought existing at the same time. If an LLM is able to summarize them, that is already great.

In some domains like math, however, there IS truth, and LLMs have shown great proficiency in reaching it. Still, it remains an open question 1/ what the nature of that truth is, 2/ whether humans have an innate sense of the concept beyond statistical approximation or strong correlations, and 3/ whether machines can reach it too.

I had a very long conversation with ChatGPT about this that seemed to get very deep into philosophical concepts I was clearly not familiar with, but my understanding was that there IS a non-zero possibility that a model can be trained to actually seek truth, and that this ability should not be confined to humans only.

I don't have additional arguments to convince you of the above, but in the end I still, at the moment, prefer the Grok approach (if that is truly what they do at X) of "seeking truth" over someone giving up the fight and saying "eh, everything is biased, so let's go full relativism instead so as not to offend people or look too whatever-culture-centered".