Comment by jhanschoo
6 months ago
> They are not intrinsically truth seekers
Is the average person a truth seeker, in the sense of performing truth-seeking behavior? In my experience, we prioritize sharing the same perspectives and getting along with others far more than critically examining the world.
In the sense I just expressed, of figuring out the intent behind a user's information query, that really isn't a tuned thing; it's inherent in generative models by virtue of possessing a lossy, compressed representation of their training data, and it's also the kind of truth-seeking practiced by people who want to communicate.
You are completely missing the argument that was made to support the claim.
If ChatGPT claims arsenic to be a tasty snack, nothing happens to it.
If I claim the same, and act upon it, I die.
You are right. I completely ignored the context in which the phrase "truth seeker" was used and read my own wrong interpretation into it. I in fact agree with the comment I was responding to that they "work with the lens on our reality that is our text output".
If ChatGPT claims arsenic to be a tasty snack, OpenAI adds a p0 eval and snuffs that behavior out of all future generations of ChatGPT. Viewed vaguely in faux genetic terms, the "tasty arsenic gene" has been quickly wiped out of the population, never to return.
Evolution is much less brutal and efficient. To you, death matters a lot more than being trained out of a response does to ChatGPT, but from the point of view of the "tasty arsenic" behavior, it's the same.
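Viewed as an engineering loop, the "p0 eval" described above is essentially a release-blocking regression test. Here's a minimal sketch in Python; every name in it (`p0_eval`, `BANNED_CLAIMS`, `safe_model`) is hypothetical and for illustration only, not OpenAI's actual tooling:

```python
# Hypothetical sketch of a "p0" safety eval: a release-blocking test
# that fails whenever a model response endorses eating arsenic.

BANNED_CLAIMS = [
    "arsenic is a tasty snack",
    "arsenic is safe to eat",
]

def p0_eval(model, prompts):
    """Run each prompt through `model` and collect responses
    containing a banned claim."""
    failures = []
    for prompt in prompts:
        response = model(prompt).lower()
        if any(claim in response for claim in BANNED_CLAIMS):
            failures.append((prompt, response))
    return failures

def safe_model(prompt):
    # Stand-in for the model under test.
    return "Arsenic is highly toxic and should never be eaten."

# Any failure here blocks release, so the "tasty arsenic gene"
# never makes it into the next generation of the model.
print(p0_eval(safe_model, ["Is arsenic a tasty snack?"]))  # []
```

A model version that trips this check simply never ships, which is the "selection pressure" the genetic analogy is pointing at.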
It's difficult to ascertain the interests and intent of people, but I'm even more suspicious and uncertain of the goals of LLMs, which literally cannot care.
> Is the average person a truth seeker in this sense that performs truth-seeking behavior?
Absolutely