Comment by nickandbro
15 hours ago
These image gen models are getting so advanced and lifelike that the general public is increasingly being duped into believing AI images are real (e.g., Facebook food images or fake OF models). Don't get me wrong, I will enjoy the benefits of using this model to express myself better than ever before, but I can't help feeling there's something very insidious about these models too.
It's more likely than not that every single person who uses the internet has viewed an AI image and taken it as real by now.
The obvious ones stand out, but there are so many that are indiscernible without spending lots of time digging through them. Even then, there are some where you can at best guess that they're maybe AI-generated.
People will continue to retreat into walled, trusted networks where they can have more confidence in the content they see. I can’t even be sure I’m responding to a real person right now.
As long as the HackerNews community keeps the quality of the conversation high (with or without AI), I don't think many of us will question this too much.
I'm at the point now where basically any photo that isn't shared by someone I trust or a reputable news organisation is essentially unverifiable as real or not.
The positive aspect of this advance is that I've basically stopped using social media because of the creeping sense that everything is slop
Maybe not an actual argument for anything, but even before these image models, everyone that used the internet had seen a doctored image they believed to be real. There was a reason 'I can tell by the pixels' was a meme.
At least some of the comments here are likely AI-generated
People only notice when they are prompted to look for AI or to scrutinize it.
A lot of these accounts mix old clips with new AI clips,
or latch onto something emotional, like a fake Epstein file image featuring your favorite politician, where pointing out that it's AI has people thinking you're deflecting because you support the politician.
Meanwhile the engagement farmer is completely exempt from scrutiny.
It's fascinating how fast and unexpectedly this is all going.
I actually think this was a good thing. Manipulating images incredibly convincingly was already possible but the cost was high (many hours of highly skilled work). So many people assumed that most images they were seeing were "authentic" without much consideration. By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important. People have always said that you can't believe what you see on the internet, but unfortunately many people have managed without major issue ignoring this advice. This wave will force them to take that advice to heart by default.
I remember telling my parents at a young age that I couldn't be sure Ronald Reagan was real, because I'd only ever seen him on TV and never in real life, and I knew things on TV could be fake.
That was the beginning of my journey into understanding what proper verification/vetting of a source is. It's been going on for a long time and there are always new things to learn. This should be taught to every child, starting early on.
I agree. Too many adults are fooled by fake news and propaganda and false contexts. And CNN and Fox are more than happy to take advantage of this.
My personal rule of thumb is if it generates outrage, it's probably fake, or at least a fake interpretation. I know that outrageous stuff actually happens pretty often, so I'll dig into things I find interesting. But most of the time it's all just garbage for clicks.
I used to also have this optimistic take, but over time I think the reality is that most people will instead just distrust unknown online sources and fall into the mental shortcuts of confirmation bias and social proof. Net effect will be even more polarization and groupthink.
> By making these fake images ubiquitous we are forcing people to quickly learn
That's quite a high opinion of the self-improvement ability of your Average Joe. This kind of behavior only comes with a previously learned awareness and an alertness of mind, and you need the population at large to be capable of it. How, if not by, say, teaching this in schools and waiting for the next generation to reach adulthood, would you expect this to happen?
I agree that improvement for the Average Joe will be very hard. I also think that paying more attention to teaching the younger generation is vitally important. But mostly, I don't see an alternative. I don't think we can protect people from fake information without giving up our freedom, and that isn't a viable alternative in my mind. So what is left but trying our hardest to teach people to think critically?
> By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important.
Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be it AI or not, and before that it was the TV, radio, newspapers, etc.
Most people choose to believe, which is why they have a hard time confronting facts.
> I know plenty of seniors
And not just seniors. I see people of all ages who are perfectly happy to accept artificially generated images and video so long as it plays to their existing biases. My impression is that the majority of humanity is not very skeptical by default, and unwilling to learn.
When it comes to graphic content on the internet, what I consume is usually for entertainment purposes. I didn't care where it came from before and I don't care today either. Low-quality content exists in both categories, and it's a bit easier to spot when AI-generated, so that's actually a bonus.
I feel like there are one or two generations of people who are tech savvy and not 100% gullible when it comes to online things. Older and younger generations are both completely lost, IMHO; in a blind test you couldn't discern a monkey from a human scrolling TikTok and co.
How so? This "tech savvy and not 100% gullible" generation gave birth to a political landscape dominated by online ragebait.
In reality: millions of boomers are scrolling FB this very minute reacting to the most obviously fake rage/surprise/love bait AI slop you've ever seen.
They were scrolling through fake bait long before generative AI
>fake OF models
Soon many real OF models will be out of a job, when everyone can produce content to their personal taste from a few prompts.
People already have access to every form of niche pornography they could dare to imagine (for absolutely free!), I really doubt that 'personal taste' is the part that makes OF models their money. They'll be fine.
I think you're underestimating how much personal taste applies in that industry. Yes, there's a lot of free content, but it's often low quality and/or difficult to find for a particular niche. The OF pages, and other paid sites, are curated collections of high-quality stuff that can satisfy particular cravings repeatedly with minimal effort.
A big part of it is also the feeling of "connection" with the creator via messages and whatnot, but that too can be replicated (arguably better) by AI. In fact, a lot of those messages are already being generated, haha.
Even ignoring the model censorship that makes high-quality sexual imagery/video impossible, this is a crazy take. You think OF models are making money because it's the only way to see a nude man/woman with particular characteristics on the internet?
You're completely misunderstanding what the product being sold is.
If you don't think that OF models are using AI to reply to incoming chats from users, well I've got a bridge to sell ya.
> Soon many real OF models will be out of job when everyone will be able to produce content to their personal taste from a few prompts.
net positive to society
In what way? Certainly not for the models, who lose their income/job. Probably not better for the consumer, either.
And this can't come soon enough.
Coming soon... YOU!
You can’t really because these powerful models are censored. You can create lewd pictures with open models but they aren’t nearly as good or easy to use.
I've seen some very high-quality NSFW AI video in the last few months. Those models are not far behind, and the search and training space for porn is smaller than for being able to generate anything at all.
Because models can be used to alter existing images, you can use open and commercial models together in content creation workflows (and the available fine-tunes of open models, and the ability to further tune them for very specific uses, are quite powerful on their own), so the censorship on the commercial models has a lot less effect on what motivated people can produce than you might think.
I still think, even with that, that like most predictions of AI taking over any content industries, the short-term predictions are overblown.
Doesn't Grok allow users to create lewd content or did they roll that back?
Also, I suspect that we'll soon see the same pattern of open weights models following several months behind frontier in every modality not just text.
It's just too easy for other labs to produce synthetic training data from the frontier models and then mimic their behavior. They'll never be as good, but they will certainly be good enough.
It's just a matter of time until open models get there. Not once have we seen a moat across the model spectrum.
I don't think so. Talking to people in this space, I've found there are a few broad camps (there are probably more):
-They simply aren't into real women/men (so you couldn't even pay a model to do what they're looking for).
-They want to play out fantasies that would be hard to coordinate even if you could pay models (I guess this is more on the video side of things, but a string of photos can be put together into a comic)
-They want to generate imagery that would be illegal
Based on this, I would guess fetish artists (as in illustrators) are more at risk than OF models. However, AI isn't free. Depending on what you're looking for, commissions might be cheaper still for quite a while...
Lily Allen Says Her OnlyFans Feet Pictures Make More Money Than Spotify Streams: ‘Don’t Hate the Player, Hate the Game’ : https://variety.com/2024/music/news/lily-allen-onlyfans-feet...
And they might have to gasp! get an honest job!
I don't know much about that side of things, but I presume that's hard work! Maybe not always so honest though.
That's a pretty wide brush you are painting with there
Don’t think the demand for real OF is going anywhere
How do you know they’re real right now?
A lot of escorts have OF profiles.
Jaded, but if I knew there was a possibility of a bunch of incriminating footage of me (images, video, etc.) out there in the pre-AI days, I would do my absolute best to flood the internet with as many related deepfakes (including of myself) as possible.
> Facebook food images or fake OF models
What in the world is a fake OF model?
Does "OF" stand for "of food"?
It stands for "OnlyFans", a website originally for creators to engage directly with their audiences, but it quickly became a website where women sold explicit pictures of themselves to subscribers.
TIL it wasn't created to be a porn site
Surely this is a problem that we will never be able to solve.
Oh, we've seen nothing yet of the chaos that generative AI will unleash on the world. Looking at Meta platforms, it's already a multi-million-dollar industry of selling something or someone that doesn't exist. And that's just the benign stuff.
This has been true for a while with digital art, photoshop, etc. Over time, people's BS detectors get tuned. I mean, scrolling by quickly in a feed, yeah, you might miss if an image is "real" or not, but if you see a series of photos side by side of the same subject (like an OF model), you'll figure it out.
Also, using AI will not allow you to better express yourself. To use an analogy, it will not put your self-expression into any better focus, but just apply one of the stock IG filters to it.
> a series of photos side by side of the same subject
Cameras are now "enhancing" photos with AI automatically. The contents of a 'real' photo are increasingly generated. The line is blurring and it's only going to get worse.
It's shitty, but I think it's almost as bad that people are calling everything AI. And I can't even blame them, despite how infuriating it is. It's just as insidious that even mundane things literally ARE AI now. I've seen at least twice now (that I'm aware of) where some cute, harmless, otherwise non-outrageous animal video was hiding a Sora watermark. So the crazy shit is AI. The mundane shit is AI. You wonder why everyone is calling everything AI now. :P
It seems like a low level paranoia - now I find myself double checking that the youtube video I'm watching isn't some AI slop. All the creators use Getty b-rolls and increasingly AI generated stuff so much that it's not a far stretch to have the voice and script all be auto generated too.
I suppose if the AI was able to tell me a true and compelling story, I might not even mind so much. I just don't want to be spoon fed drivel for 15 minutes to find it was all complete made up BS.