Comment by davidguetta

2 days ago

So if I understand your point, you are saying "LLMs are not going to do better than a (possibly imperfect) average human consensus unless we actively bias them"? First of all, that does not seem so bad if it's the case.

Secondly, trying to go further seems to edge toward the entire question of 'is there an actual truth, and can LLMs be trained to find it?'

My opinion is that in many cases there is 'truth', and typically the human consensus, when acting in good faith, tries to converge toward it. When it's not necessarily possible to have a "truth" (as in history, for example, where perspective is very important), "consensus" tends to manifest as several currents of thought existing at the same time. If an LLM is able to summarize them, that is already great.

In some domains like math, however, there IS truth, and LLMs have shown great proficiency in reaching it. Still, it remains an open question 1/ what the nature of that truth is, 2/ whether humans have an innate sense of the concept beyond statistical approximation or strong correlations, and 3/ whether machines can reach it too.

I had a very long conversation with ChatGPT about this that got very deep into philosophical concepts I was clearly not familiar with, but my understanding was that there IS a non-zero possibility that a model can be trained to actually seek truth, and that this ability should not be confined to humans only.

I won't have additional arguments to convince you of the above, but in the end I still, at the moment, prefer the Grok approach (if that is truly what they do at X) of 'seeking truth' over someone giving up the fight and saying "eh, everything is biased, so let's go full relativism instead, so as not to offend people or look too whatever-culture-centered".