Comment by gpm

9 hours ago

Using it as a reference is a high bar, not a low bar.

The AI videos aren't trying to be accurate. They're put out by propaganda groups as part of a "firehose of falsehood". Not trusting an AI that was told to lie to you is different from not trusting an AI.

Even without that, playing a game of broken telephone is a good way to get bad information. Hence why even a reasonably trustworthy AI is not a good reference.

Not that this makes it any better, but a lot of AI videos on YouTube are published with no specific intent beyond capturing ad revenue - they're not meant to deceive, just to make money.

  • Not just YouTube either. With Meta and TikTok paying out for "engagement", all forms of engagement are good for the creator, not just positive engagement. These companies are directly encouraging "rage bait" content, pure propaganda, and misinformation because it gets people interacting with the content.

    There's no incentive to produce anything of value beyond "whatever will get me the most clicks/likes/views/engagement".

  • One type of deception, conspiracy content, sells products on the premise that the rest of the world is wrong or hiding something from you, and only the demagogue knows the truth.

    Anti-vax quacks rely on this tactic in particular. The reason they attack vaccines is that they are so profoundly effective and universally recognized that to believe otherwise effectively isolates the follower from the vast majority of healthcare professionals, forcing trust and dependency on the demagogue for all their health needs. Mercola built his supplement business on this concept.

    The more widespread the idea they're attacking, the more isolating (and hence stickier) the theory. This might be why flat earthers are so dogmatic.

> Not trusting an AI told to lie to you is different from not trusting an AI

The entire foundation of trust is that I'm not being lied to. I fail to see a difference. If they are lying, they can't be trusted.

  • Saying "some people use LLMs to spread lies, therefore I don't trust any LLMs" is like saying "some people use people to spread lies, therefore I don't trust any people". Regardless of whether or not you should trust LLMs, this argument is clearly not proof that you shouldn't.

    • That's a false equivalence. If a technology can't reliably sort out what is a trustworthy source and filter out the rest, then it's not a trustworthy technology. They are tools, after all. I should be able to trust a hammer if I use it correctly.

      All this is also missing the other point: this proves that the narratives companies are selling about AI are not based on its objective capabilities.
