
Comment by gpm

8 hours ago

Saying "some people use llms to spread lies therefore I don't trust any llms" is like saying "since people use people to spread lies therefore I don't trust any people". Regardless of whether or not you should trust llms this argument is clearly not proof of it.

Those are false equivalences. If a technology can't reliably sort out trustworthy sources and filter out the rest, then it's not a trustworthy technology. These are tools, after all. I should be able to trust a hammer if I use it correctly.

All this is also missing the other point: this proves that the narratives companies are selling about AI are not based on objective capabilities.

  • The claim here isn't that the technology can't, but that the people using it chose not to. Equivalent to the person with a hammer who chose to smash the 2x4 into pieces instead of driving a nail into it.

    • The claim here is that it can't, because it won't filter its own garbage, let alone anyone else's.

      The narrative being pushed boils down to LLMs and AI systems being reliable. The fact that Google AI can't even tag YouTube videos as unreliable sources and filter them out of the result set before analysis is telling.