Comment by nubela

8 days ago

There is this bias problem not just with ChatGPT, but with LLMs in general: they are not able to be objective. For example, paste in arguments from two lawyers, where lawyer A uses very strong words and writes a lot more, while lawyer B has a strong case but says less. LLMs will consistently err towards the side that uses stronger language and writes more.

This, to me, is a sign that intelligence/reasoning is not present yet. That said, it does seem like something that can be "trained" away.