Comment by spooky_deep

1 day ago

They already are?

All popular models have a team fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance teams agree to is what gets tuned. Then millions of people use the output uncritically.

> Then millions of people use the output uncritically.

Or critically, but it's still an input or viewpoint to consider.

Research shows that if you come across something often enough, you become biased toward it even if the message explicitly says that the information you just saw is false. I'm not sure which study that was exactly, but this seems to be at least related: https://en.wikipedia.org/wiki/Illusory_truth_effect

Our previous information came through search engines. It seems far easier to filter search engine results than to fine-tune models.

  • The way people treat LLMs these days, they place far more trust in their output than in random Internet sites.