Comment by dgunay

1 day ago

I saw an AI-generated video the other day: security camera footage of a group of people attempting to rob a store, then running away after the owner shoots at them. The graininess and low frame rate made it much harder to tell it was AI-generated than the usual shiny, high-res, oddly smooth AI look does. There were only very subtle tells - the non-reaction of bystanders in the background, and a physics mistake that was easy to miss in the commotion.

We're very close to nearly every video on the internet being worthless as a form of proof. This bothers me a lot more than text generation does, because video is typically admissible as evidence in a court of law, and especially in the court of public opinion.

I saw that video; it wasn't AI-generated. The compression artifacts were red herrings. The real store owner spoke about the experience:

https://x.com/Rimmy_Downunder/status/1947156872198595058

(sorry about the X link, couldn't find anything else)

The problem of real footage being discredited as AI is as big as the problem of AI footage being passed off as real. But both are facets of a larger problem: AI can simulate all costly signals of value very cheaply, so all the inertia that depended on the costliness of those channels breaks down. This is true for epistemics, but also for social bonds (chatbots), credentials, and experience and education (AI already performs better than experienced humans on many knowledge tasks), among others.