Comment by acatton
20 days ago
> “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip, cut it to strip its context, and even more effort to doctor one. True, debunkers still face Brandolini's law[1]; you can see that in the amount of effort Captain Disillusion[2] puts into his videos to debunk this crap.
But AI makes it 100× worse. First, generating an entirely convincing video takes only a bit of prompting and waiting; no skill is required. Second, you can do it at scale. You can easily make 2 AI videos a day. If you wanted to doctor videos "the old way" at that pace, you'd need a team of VFX artists.
I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.
Seems rather simple to solve to me.
Just have video cameras (mostly phones these days) embed a cryptographic hash in the video that video-sharing platforms read and display. That way we'd know a video was recorded on the uploader's camera and not just generated by software.
There aren't that many big tech companies responsible for making the devices people record with, or for running the platforms and software people use to play the content back.
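A minimal sketch of the attestation flow described above. Note that a bare hash only proves the file is unmodified, not where it came from; real provenance schemes (e.g. C2PA) bind the hash to a signature from a key held in the camera's secure hardware. The HMAC and the `DEVICE_KEY` below are simplified stand-ins for that asymmetric signature, purely for illustration:

```python
import hashlib
import hmac

# Hypothetical per-device secret provisioned at manufacture. Real schemes
# use an asymmetric key pair in secure hardware, so platforms can verify
# with a public key and never hold the signing secret.
DEVICE_KEY = b"example-device-key"

def sign_recording(video_bytes: bytes) -> str:
    """Camera side: hash the recording and tag the hash with a device MAC."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{tag}"

def verify_recording(video_bytes: bytes, attestation: str) -> bool:
    """Platform side: recompute the hash, then check the device tag."""
    digest, tag = attestation.split(":")
    if hashlib.sha256(video_bytes).hexdigest() != digest:
        return False  # file was altered after recording
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

clip = b"raw video bytes"
att = sign_recording(clip)
print(verify_recording(clip, att))         # prints True: untouched clip
print(verify_recording(b"doctored", att))  # prints False: altered clip
```

Even this toy version shows the hard part isn't the crypto: it's key management across billions of devices, and the fact that a clip re-encoded or trimmed by legitimate editing software loses its attestation.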