Fake images that fooled the world

8 days ago (theguardian.com)

Thought it might be a fun exercise to see how little time it would take to create similar approximations of the original deepfakes using GenAI models.

https://mordenstar.com/blog/historic-deepfakes-with-ai

It was actually just yesterday that I decided YouTube Shorts are no longer a fun way to kill time. There's a lot of amazing stuff to watch, but it's no fun any more, because anything you see that seems amazing is likely to be AI-generated, which, for me, ruins it. You're not watching videos of reality; you're basically looking at digital art at this point.

Photos, videos, and audio are no longer "proof" of anything. Any 10-year-old kid can generate basically anything he wants. I love AI, but it's sad to be living in a world where 'authenticity' itself is now permanently dead.

  • > It was actually just yesterday that I decided YouTube Shorts are no longer a fun way to kill time. There's a lot of amazing stuff to watch, but it's no fun any more, because anything you see that seems amazing is likely to be AI-generated, which, for me, ruins it.

    And the ones that aren't AI-generated are badly clipped scenes from movies/TV shows with the same five royalty-free songs playing over them, which might as well have been produced by AI.

What qualifies the Jennifer in Paradise photo for inclusion? That photo is reportedly real, even according to the description given.

It was used as a demonstration photo for a famous photo-editing program that was later used to fool the world, but the image itself is ostensibly a real photo, not a fake.

  • Nothing. Nothing qualifies several of them; the photo of Filippa Hamilton is noted in the blurb as immediately drawing ridicule from the public.

    Or take this description of the edited image of Elvis:

    > the United Press agency decided to create a mock-up of what the king of rock’n’roll might look like with the typical GI hairstyle, retouching a photo of the singer to remove his quiff (and leaving him with a somewhat disfigured head). “Not all manipulated photographs are intended to deceive,” notes Mia Fineman, a curator at New York’s Metropolitan Museum of Art

    Only the headline says "images that fooled the world"; the article is about something different.

    • You can be fooled by something without anyone intending to deceive you. If people believed that was actually what Elvis was going to look like, they would have been "fooled" whether or not anyone had that intention.

  • The article is a fine example of empty journalism: not intended to inform, just to entertain, and only lightly at that. There's no effort to be definitive or authoritative; the choices expose casual effort. It's really just "hey! Here's something interesting."

    I'm a scholar in this area, and that article is shit.

    • It's an entertainment article, and doesn't make any claim to be anything else. I'm not sure what you would prefer in its place; can you give any examples of what you would consider to be a suitable treatment of this topic?

      1 reply →

I agree photo manipulation has always happened, to various degrees of perfection, since the dawn of photography.

I suppose the real difference is that it used to take a more artisanal, time-consuming process, and now -- increasingly -- it takes far less time to create something convincing enough. Same with video: you could fake a video, do editing, etc., but it took time, skill, a location to shoot in, and so on. Now it's becoming easier for everyone. And it's not perfect yet, but are we sure it won't get there? It doesn't have to be perfect anyway; it just has to fool most people within a given window of time.

A great example that underscores the ordinariness of AI. It's a tool, and tools can be used for good, bad, neither, and in between.

Fake pics have existed pretty much since pics existed.

Kids have been looking for ways to cheat on tests since tests began. If you're a teacher, you're gonna have to test in person.

Fake phone calls, fake other things... yeah, they're of a different/better quality as the technology has gotten better. Is it so fundamental a shift that nothing can be done? I'm not convinced.

  • The ease of cheating/creating fakes surely influences how much cheating and how many fakes are in circulation, and while we can tolerate a little, excessive amounts will be disruptive. So many technologies moved from obscure curiosities to mass adoption just because somebody made them easier/cheaper to use.

    If at some point cheats/fakes become cheaper and easier than the real thing, you can bet that will be a fundamental shift in how we approach the world.

    • > If at some point cheats/fakes become cheaper and easier than the real thing

      Is there any evidence this is ever going to happen? The evidence I see points in the opposite direction: everyone has so many sensors, and so much data is being recorded about the real world, that it is actually harder to fake things.

      For example, there used to be a widespread belief in aliens and animal cryptids in the 70s. Today, less so, because people capture everyday reality on a much bigger scale than they used to.

      1 reply →

    • It's not only the excess, it's the ease of access. Kids can produce lewd pics of classmates and make their lives hell. This technology is fundamentally evil.

      4 replies →

  • What good can it be used for? Because I haven't seen anything that makes faking pics with AI so valuable that we can ignore the negatives.

    The article also seems to take the relativist stance: nothing new to see here, move along now. Why? For the clicks? Just being contrarian?

    • Many manifestations of generative AI allow people to put concepts onto screens faster. It generally serves as a more efficient translator of "I want a contract like this one but more tailored to [new client]" or "I want to make a strategy for my [new business]."

      In information-economy jobs, translating thoughts and ideas into better formal communication more efficiently is valuable, be it pictures or text.

      1 reply →

    • The same generation process is also used for... well... generating anything. These models are compression functions: you're learning an intractable data distribution (one you can't write down as an equation) and turning it into something you have a bit more control over. Images were, and are, a great test platform for this, since we humans can visually inspect the outputs and verify that we've accurately learned a good generating function. But the process can be applied to any data, and truthfully, variants of it have been used throughout science for decades (arguably at least a century, but statistics really benefited from computers).

      For just the domain of image generation there are still a lot of useful things. Want to do any upscaling? These processes can help there, since you're learning a more complex transform than something like bicubic interpolation (yes, there are more advanced algorithms; this is just an example). The same is actually true for downsampling. We can even talk about rotating images, which is a classic problem in old video games. There's also typical photo editing, which is done widely, most notably by Hollywood. Even if your AI only gets you 70% of the way there it can still be helpful (if that first 70% isn't trivial). It is also directly used in compression algorithms: it is much cheaper to share an encoder and decoder structure once, which can be computed locally, and then transmit a smaller signal (a toy sketch of that idea follows at the end of this comment). The transmission is not only typically the more expensive part; it's usually also the bottleneck and has the largest chance of data corruption.

      Yeah, I agree, most people are using the tech in weird ways, and there's a lot of weird hype around malformed images that are obviously malformed if you look at them with more than a passing glance (or not through rose-colored glasses). But there are a lot of useful applications for this stuff, ones that could benefit the world far more, and personally I'm left wondering why even a small fraction of the investment going into status quo image generators and LLMs isn't going into these other domains. I'm guessing it's because image generators and LLMs are easier to understand? But it is a shame.
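
      A toy sketch of that shared encoder/decoder idea, for the curious. This assumes PyTorch and uses random tensors as a stand-in for real images; it illustrates the principle, not any particular production codec:

        # Toy learned compression: train decoder(encoder(x)) ~ x once,
        # share the weights, then transmit only the small latent code.
        import torch
        import torch.nn as nn

        class TinyAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                # Encoder: 3x64x64 image -> 32x8x8 latent (6x fewer numbers to send).
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                    nn.Conv2d(32, 32, 4, stride=2, padding=1),             # 16 -> 8
                )
                # Decoder: latent -> reconstructed image.
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = TinyAutoencoder()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        images = torch.rand(8, 3, 64, 64)  # random stand-in for a real image batch

        for step in range(200):
            loss = nn.functional.mse_loss(model(images), images)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # "Transmission": the sender runs the encoder and sends only the latent;
        # the receiver, who already has the decoder weights, reconstructs locally.
        latent = model.encoder(images[:1])   # 32*8*8 = 2048 numbers sent
        restored = model.decoder(latent)     # vs. 3*64*64 = 12288 in the original
        print(latent.numel(), restored.shape)

      Real learned codecs add quantization and entropy coding on top of the latent, but the split is the same: heavy models computed locally, a small signal on the wire.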

  • So tired of this lazy argument. Projectile murder with bows existed before guns. Guns changed the world. A severe force multiplier for something bad can't automatically be handwaved away.

    • Guns have little use beyond injuring or killing or threatening the same. On the good side: one could argue it's sometimes good to kill for hunting. On the bad side... well there is a lot of suicide, murder, and potential for the same.

      I'm not sure we understand yet how much positive and how much negative potential there is in AI.

      2 replies →

  • > Is it so fundamental a shift that nothing can be done? I'm not convinced.

    A fundamental shift away from our complete trust in technology is good. That trust encourages ignorance and obedience, and alienates people from each other.

    And the fact that AI can be used to fake pictures of your neighbors having sex is nothing but good. No one will be able to say whether any picture is real, so the public won't be able to destroy another young girl's life over it. I also think that arguing about the distribution of pretend movies of your neighbors having sex will have to lead to clear legislation regarding the distribution and sale of personal data.

    • I wish I could be as optimistic as you with regard to human nature. While we may come out the other side with a world that solves the real problems AI will create, I fear millions of people will have their lives destroyed along the way. Half of America thinks “criminals” don’t deserve due process; the guilt stems directly from the accusation. In short, people suck.

  • You have to factor in the overall lower barrier to entry (little to no technical skill required, cheap tools easily accessible, etc.) paired with distribution capacity on a massive scale at little cost (you don't need to be featured in a local newspaper to get picked up by national networks and go "viral").

    You can literally produce fake information at an industrial scale, distribute it in real time, and see what sticks at virtually no cost.

    How do you think we got to the point of breaking the world?

  • People have been killing each other since people have existed, yet an M30A1 rocket filled with 180k tungsten beads exploding above your city is much more effective than a dude with a flint knife. Should we give people military-grade weapons? They're going to kill each other anyway, right? Would you argue they're just the same and not fundamentally different?

  • > Kids have been looking for ways to cheat on tests since tests began. If you're a teacher, you're gonna have to test in person.

    Access is important. Yes, you could hire a scholar to write for you, but that's far more expensive, and more detectable by your parents, than asking ChatGPT. Now every student has access to some of the best cheating software on the planet.

The article is interesting, but I think it conflates two things:

"Things that never happened in the real world, and have been either created synthetically or with visual trickery"

- Man jumping into the void.

- Stalin's edited photos (in reality Yezhov walked at Stalin's side; the version without him shows a scene that never happened).

- North Korea's photoshopped/cloned hovercraft.

- The Cottingley Fairies, Loch Ness monster, "saucer" UFOs: visual trickery or props employed to simulate the existence of beings or vehicles that don't exist in the real world.

- The pope in the puffer jacket is, of course, completely faked with AI.

And

"Things that happened, but are staged or misrepresent reality/mislead the viewer".

Examples:

- The UK soldiers abusing a prisoner. The claim was probably false (in the sense that, in this particular photo, these weren't British soldiers), but it's true they were soldiers from some country abusing a prisoner. To my knowledge no one claimed the photo was staged, just that it misrepresented the situation.

- Capa's Falling Soldier photo. This actually happened; it's just that it was likely staged.

They are not the same thing, and require different levels of skill!

AI facilitates creating anything, especially the completely synthetic and fake. You don't even need to go to a location to take a photo and then edit it.

"By the 1940s, the image without the groom had become the standard version, and it created the enduring visual signs of the strongman leader – when Nigel Farage makes a speech atop a tank, or Vladimir Putin displays his bare chest, both are drawing on iconography developed by the Italian fascist."

Ah yes, equestrian portraits, something famously invented by the fascists. Someone should dig up Jacques-Louis David so we can tell him he's a fascist now.

  • Saying Mussolini developed iconography involving equestrian portraits is not the same as saying he invented equestrian portraits.

Surprised the article makes no mention of the 2023 AI-assisted enhancement of the Patterson-Gimlin Bigfoot clip. It's definitely a guy in a gorilla suit.

https://www.indy100.com/science-tech/bigfoot-footage-ai-sigh...

  • How could AI not make it look more like a man? Was the AI trained on lots of Bigfoot footage? Or was it trained on lots of pictures of people? Give it enough leeway and it will probably render Bigfoot as a man in a Barney costume, if that better conforms to the training data.

    • AI wasn't used to generate the clip, but to add some (hallucinated) detail and extend the background. FWIW, in pre-genAI stabilized examples from the 2000s it's also clearly a guy in a gorilla suit.

  • Why does stabilizing the image make it any more or less apparent?

    • I think it just means it removes the distractions of the grain and the shaky camera; a rough sketch of what classic stabilization actually does follows at the end of this comment.

      But really, it was always evident it was a guy in a gorilla suit.
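
      For what "stabilizing" concretely involves, here is a rough sketch of the classic feature-tracking approach with OpenCV (no AI anywhere): estimate the camera motion between frames, smooth that trajectory, and warp each frame by the difference. The file names are placeholders, not the actual footage.

        # Classic (non-AI) video stabilization sketch: track features, estimate
        # per-frame rigid motion, smooth the camera path, re-warp the frames.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("shaky_input.mp4")  # placeholder file name
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        transforms = []  # per-frame (dx, dy, rotation) estimates
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Track corner features from the previous frame into the current one.
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=30)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good_old = pts[status.flatten() == 1]
            good_new = new_pts[status.flatten() == 1]
            # Fit a rigid transform (translation + rotation) describing the shake.
            m, _ = cv2.estimateAffinePartial2D(good_old, good_new)
            transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
            prev_gray = gray

        # Smooth the accumulated camera trajectory with a moving average, then
        # correct each frame by the gap between the raw and smoothed paths.
        trajectory = np.cumsum(transforms, axis=0)
        kernel = np.ones(15) / 15.0
        smoothed = np.stack([np.convolve(trajectory[:, i], kernel, mode="same")
                             for i in range(3)], axis=1)
        corrections = np.array(transforms) + (smoothed - trajectory)

        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        h, w = prev.shape[:2]
        out = cv2.VideoWriter("stabilized.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"),
                              cap.get(cv2.CAP_PROP_FPS) or 24.0, (w, h))
        cap.read()  # skip frame 0 to stay aligned with the transform list
        for dx, dy, da in corrections:
            ok, frame = cap.read()
            if not ok:
                break
            m = np.array([[np.cos(da), -np.sin(da), dx],
                          [np.sin(da),  np.cos(da), dy]])
            out.write(cv2.warpAffine(frame, m, (w, h)))
        out.release()

      The point is simply that every pixel stays where the footage says it was; nothing new is synthesized, which is the difference from the AI "enhancement" discussed above.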

    • You can see the link I posted (https://youtu.be/Vsj0vK8LjVk). To my eye it makes it more clear that it is just a dude walking like any human in a costume would.

      I don't recommend it, but there is an image-stabilized Zapruder film out there that makes the Kennedy assassination a good deal more shocking/gruesome. You've been warned.

      1 reply →

  • Is there any doubt it's a gorilla suit? I think the article is disingenuous in not stating this clearly.

    The article claims the suits of the apes in Planet of the Apes were "unconvincing", but they are just as convincing as the Bigfoot image, which is to say: they are clearly (nicely made) costumes.

    We didn't need AI to "prove" what was already evident. And let me assure you -- this won't convince conspiracy theorists and Bigfoot fans, because above all, like Mulder, they "want to believe".