Comment by no_wizard
13 hours ago
>Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
If you can't trust AI to generate output sufficiently grounded in fact to serve even as a reference point, why should end users believe the narrative that these systems are as capable as they're being told they are?
Using it as a reference is a high bar, not a low bar.
The AI videos aren't trying to be accurate. They're put out by propaganda groups as part of a "firehose of falsehood". Not trusting an AI told to lie to you is different than not trusting an AI.
Even without that, playing a game of broken telephone is a good way to end up with bad information. That's why even a reasonably trustworthy AI is not a good reference.
Not that this makes it any better, but a lot of AI videos on YouTube are published with no specific intent beyond capturing ad revenue - they're not meant to deceive, just to make money.
Not just YouTube, either. With Meta and TikTok paying out for "engagement," all forms of engagement are valuable to the creator, not just positive engagement. These companies are directly encouraging rage-bait content, pure propaganda, and misinformation because it gets people interacting with the content.
There's no incentive to produce anything of value beyond "whatever will get me the most clicks/likes/views/engagement."
One type of deception, conspiracy content, is able to sell products on the basis that the rest of the world is wrong or hiding something from you, and only the demagogue knows the truth.
Anti-vax quacks rely on this tactic in particular. The reason they attack vaccines is that vaccines are so profoundly effective and universally recognized that believing otherwise effectively isolates the follower from the vast majority of healthcare professionals, forcing trust in and dependency on the demagogue for all their health needs. Mercola built his supplement business on this concept.
The more widespread the idea they're attacking, the more isolating (and hence stickier) the theory. This might be why flat earthers are so dogmatic.
> Not trusting an AI told to lie to you is different than not trusting an AI
The entire foundation of trust is that I'm not being lied to. I fail to see a difference. If they are lying, they can't be trusted.
Saying "some people use llms to spread lies therefore I don't trust any llms" is like saying "since people use people to spread lies therefore I don't trust any people". Regardless of whether or not you should trust llms this argument is clearly not proof of it.