Comment by KetoManx64

3 months ago

On the flip side, it can also save us from biased content, because it can point out all the ways the article we're reading is trying to manipulate our perspective.

With how inexpensive training is getting, it won't be long before we can train our own specialized models to fit our specific needs.

> it can also save us from biased content

I am pessimistic on that front, since:

1. If LLMs can't detect the biases in their own output, why would we expect them to reliably detect bias in documents in general?

2. As a general rule, deploying bias/tricks/fallacies/BS is much easier than the job of detecting them and explaining why they're wrong.