Comment by seiferteric
20 days ago
My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video and, even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and there it said it uses AI for "some" of the videos to recreate "real" events. I really doubt that... it all looks fake. I am just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads... it's sad.
I'm pretty sure we're already decades into the world of "has created".
Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
And who can blame them (us). It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well being) one way or the other.
It's just been too much for too long and you can tell.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite
It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.
I didn't use it disparagingly.
In fact, it's easier than ever to see the intended benefit of such a lifestyle.
Which also has a term with stigma: hipster
> It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high-impact benefits
They didn't say to avoid certain tech. They said to avoid takes and news headlines.
Your conflation of those two is like someone saying "injecting bleach into your skin is bad" and you responding with "oh, so you oppose cleaning bathrooms [with bleach]?"
it's malware in the mind. it was happening before deep fakes were possible. news outlets and journalists have always had an incentive to present extreme takes to get people angry, because that sells. now we have tools that pretty much just accelerate and automate that process. it's interesting. it would be helpful to figure out how to prevent people (especially upcoming generations) from getting swept away by all this.
I think fatigue will set in and the next generation will 'tock' back from this 'tick.' Getting outraged by things is already feeling antiquated to me, and I'm in my 30's.
There's a massive industry built around this on YT, exemplified by the OP's post about his parents. To a first-order approximation, every story with a theme of "X does sexist/racist/ageist/abusive thing to Y and then gets their comeuppance" on YouTube is AI-generated clickbait. The majority of the "X does nice thing for Y and gets a reward or surprise" videos dating from the last year or two are also AI-generated clickbait, though the former are far more numerous. Outrage gets a lot more clicks than compassion.
> news outlets and journalists have always had incentive to present extreme takes to get people angry, cause that sells.
As someone who’s read a newspaper daily for 30+ years, that is definitely not true. The news has always tried to capture your attention but doing so using anger and outrage, and using those exclusively, is a newer development. Newspapers and broadcast news used to use humor, suspense, and other things to provoke curiosity. When the news went online, it became focused on provoking anger and outrage. Even print edition headlines tend to be tamer than what’s in the online edition.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite
It really isn't that hard, if I'm looking at my experience. Maybe a little stuff on here counts. I get my news from the FT, it's relatively benign by all accounts. I'm not sure that opting out of classical social media is particularly luddite-y, I suspect it's closer to becoming vogue than not?
Being led around by the nose is a choice still, for now at least.
I think the comment you're replying to isn't necessarily a question of opting out of such news, it's the fact that it's so hard to escape it. I swipe on my home screen and there I am, in my Google news feed with the constant barrage of nonsense.
I mostly get gaming and entertainment news for shows I watch, but even between those I get CNN and Fox News, both of which I view as "opinion masquerading as news" outlets.
My mom shares so many articles from her FB feed that are both mainstream (CNN, etc) nonsense and "influencer" nonsense.
> I'm pretty sure we're already decades into the world of "has created".
Simulacra and Simulation came out in '81, as an example of how long this has been a recognized phenomenon.
I honestly think it might be downstream of individualized mass-market democracy; each person is tasked with fully understanding the world as it is so they can make the correct decisions at all levels of voting, but ain't nobody got time for that.
So we emotionally convince ourselves that we have solved the problem so we can act appropriately and continue doing things that are important to us.
The founders recognized this problem and attempted to set up a Republic as an answer to it, so that each voter didn't have to ask "do I know everything about everything so I can select the best person" and instead was asked "of this finite, smaller group, who do I think is best to represent me at the next level?" We've basically bypassed that; every voter knows who ran for President last election, but hardly anyone can identify their party's local representative within the party itself (which is where candidates are selected, after all).
Completely agree, but at the same time I can't bring myself to believe that reinforcing systems like the electoral college or reinstating a state-legislature-chosen Senate would yield better outcomes.
Most people I know who have strong political opinions (as well as those who don't) can't name their own city council members or state assemblyman, and that's a real problem for functioning representative democracy. Not only for their direct influence on local policy, but also because these levels of government also serve as the farm team or proving grounds for higher levels of office.
By the time candidates are running with the money and media of a national campaign, in some sense it's too late to evaluate them on matters of their specific policies and temperaments, and you kind of just have to assume they're going to follow the general contours of their party. By and large, it seems the entrenched political parties (and, perhaps, parties in general) are impediments to good governance.
I disagree.
Voting on principles is fine and good.
The issue is the disconnect between professed principles and action. And the fact that nowadays there are not many ways to pick and choose principles except two big preset options.
It's easier to focus on fewer representatives, and because the federal government has so much power (and then state governments), life-changing policies mainly come top-down. Power should instead flow bottom-up, with the top being the linchpin, but alas.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite.
It’s quite easy actually. Like the OP, I have no social media accounts other than HN (which he rightfully asserts isn’t social media but is the inheritor of the old school internet forum). I don’t see the mess everyone complains about because I choose to remove myself from it. At the same time, I still write code every day, I spend way too much time in front of a screen, and I manage to stay abreast of what’s new in tech and in the world in general.
Too many people conflate social media with technology more broadly and thus make the mistake of thinking that turning away from social media means becoming a luddite. You can escape the barrage of trolls and hottakes by turning off social media while still participating in the much smaller but saner tech landscape that remains.
Nothing wrong with being a Luddite these days. It’s the only way to not have your mind assaulted.
I feel like you people are intentionally misconstruing what "Luddite" means. It doesn't mean "avoids specific new tech." It means "avoiding ALL new tech because new things are bad."
A luddite would refuse the covid vaccine. They'd refuse improved trains. They'd refuse EVs. Etc. This is because Luddism is the blanket opposition to technological improvements.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite.
Then I am very proudly one. I don't do TikTok, FB, IG, LinkedIn or any of this crap. I do a bit of HN here and there. I follow a curated list of RSS feeds. And twice a day I look at a curated/grouped list of headlines from around the world, built from a multitude of sources.
Whenever I see a yellow press headline from the German bullshit print medium "BILD" when paying for gas or out shopping, I can't help but smile. That people pay money for that shit is - nowadays - beyond me.
To be fair. This was a long process. And I still regress sometimes. I started my working life at an editorial team for an email portal. Our job was to generate headlines that would stop people from logging in to read their mail and read our crap instead - because ads embedded within content were way better paid than around emails.
So I actually learned the trade. And learned that outrage (or sex) sells. This was some 18 or so years ago - the world changed since then. It became even more flammable. And more people seem to be playing with their matches. I changed - and changed jobs and industries a few times.
So over time I reduced my news intake. And during the pandemic I learned to definitely reduce my social media usage; it is just not healthy for my state of mind. Because I am way too easily dopamine-addicted and triggerable. I am a classic xkcd.com/386 case.
> Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
Case in point: if you ask for expertise verification on HN you get downvoted. People would rather argue their point, regardless of validity. This site’s culture is part of the problem and it predates AI.
This has been going on since Usenet. Nothing new.
Just twenty minutes ago I got a panic call that someone was getting dozens of messages that their virus scanner wasn't working and that they had hundreds of viruses. Blocking Google Chrome from sending messages to the Windows notification bar brought everything on the computer back to normal.
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
Maybe I should set up a Pi-hole business...
If there were a GDPR-type law applying to any company above a certain size (so as to only catch the big ad networks) that allows the propagation of "false" ads claiming security issues, monetary benefits, or government services, it could stop the transmission of most of the really problematic ads. Any company the size of Google is also in the risk-minimization business, and they would set up a workflow to filter out "illegal" ads to at least a defensible level so they don't get fined more than the ads pay.
Also, can you set Windows not to allow ad notifications through to the notification bar? If not, that should also be a point of the law.
Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.
Not scold (that is how we shape social behavior), but only note that Safe Harbor essentially grants the opposite of this (away from the potential default of "By transiting malware you are complicit and liable in the effect of the malware") so it'd have to be a finely-crafted law to have the desired effect without shutting down the ability to do both online advertising and online forums at all.
... which doesn't sound impossible. It's also entirely possible that the value of Section 230 has run its course and it should generally be remarkably curtailed (its intent was to make online forums and user-generated-content networks, of which ad networks are a kind, possible, but one could make the case that it has been demonstrated that operators of online forums have immense responsibility and need to be held accountable for more of the harms done via the online spaces they set up).
People just need to have like a barebones understanding of: computer hardware level, OS level, browser level and how permissions work between the three.
If you have that you will never get scared by a popup in Chrome.
If there are ad incentives, assume all content is fake by default.
On the actual open decentralized internet, which still exists, mastodon, IRC, matrix... bots are rare.
That’s not because it’s decentralized or open, it’s because it doesn’t matter. If it was larger or more important, it would get run over by bots in weeks.
Any platform that wants to resist bots needs to:

- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter out content not marked as AI
- and be absolutely ruthless in permabanning anyone who posts AI content unmarked; one strike and you are dead forever
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.
It's never going to happen, but I felt we solved all of this with forums and IRC back in the day. I wish we gravitated towards that kind of internet again.
Group sizes were smaller and as such easier to moderate. There could be plenty of similar-interest forums, which meant even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections on the same forum) were good at filtering out low-value posters.
There were bots, but they were not as big of a problem. The message amplification was smaller, and it was probably harder to ban evade.
>and be absolutely ruthless in permabanning anyone who posts AI content unmarked,
It would certainly be fun to trick people I dislike into posting AI content unknowingly. Maybe it has to be so low-key that they aren't even banned on the first try, but that just seems ripe for abuse.
I want a solution to this problem too, but I don't think this is reasonable or practical. I do wonder what it would mean if, philosophically, there were a way to differentiate between "free speech" and commercial speech such that one could be respected and the other regulated. But if there is such a distinction I've never been able to figure it out well enough to make the argument.
Usenet died partly due to the ads, and the inability of ad-blocking software at the time to keep up.
People left and never came back.
But those bots were certainly around in the 90s
Worst of all, the bots, spam, and ads are still there, even if there is no one left to read them. Usenet might still be alive (for piracy/binaries at least), and maybe a handful of still-active text groups, but the text groups I used to read have been nothing but a constant flow of spam for 15+ years.
Of course - because everyone is banned upon first suspicion.
I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality - never quite knowing the truth.
Those subreddits label content wrong all the time. Some of the top commenters are trolling (I've seen one cooking video where the most-upvoted comment is "AI, the sauce stops when it hits the plate"... as thick sauce should do).
You're training yourself with a very unreliable source of truth.
> Those subreddits label content wrong all the time.
Intentionally if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.
> You're training yourself with a very unreliable source of truth.
I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.
If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?
Alternatively, I watch a video and think it’s AI but a commenter points to a source like YT where the video was posted 5 years ago, or multiple similar videos/news articles about the weird subject of the video, how is that unreliable?
"I may have to accept that this is just the new reality - never quite knowing the truth."
Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)
https://en.wikipedia.org/wiki/I_know_that_I_know_nothing
I’m really hoping that we’re about to see an explosion in critical thinking and skepticism as a response to generative AI.
Any day now… right?
Before photography, we knew something was truthful because someone trustworthy vouched for it.
Now that photos and videos can be faked, we'll have to go back to the older system.
Yeah this is what I always expected to happen. Cryptographic signing of source material so you can verify who the initial claimant is, and base credibility on the identity of that person.
It was always easy to fake photos too. Just organize the scene, or selectively frame what you want. There is no such thing as any piece of media you can trust.
Ah yes the good old days of witch trials and pogroms.
I am no big fan of AI but misinformation is a tale as old as time.
My favorite theory about those subreddits is that it's the AI companies getting free labeling from (supposed) authentic humans so they can figure out how to best tweak their models to fool more and more people.
What if AI is running RealOrAI to trick us into never quite knowing the truth?
a reliable giveaway for AI generated videos is just a quick glance at the account's post history—the videos will look frequent, repetitive, and lack a consistent subject/background—and that's not something that'll go away when AI videos get better
> [...] and lack a consistent subject/background—and that's not something that'll go away when AI videos get better
Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?
AI is capable of consistent characters now, yes, but the platforms themselves provide little incentive to use them. TikTok/Instagram Reels are designed to serve recommendations, not a user-curated feed of people you follow, so consistency is not needed for virality.
Or they are reposting other people's content
Sort by oldest. If the videos go back more than 3 years watch an old one. So many times the person narrating the old vids is nothing like the new vids and a dead ringer for AI. If the account is less than a year old, 100% AI.
New AI narration is a dead giveaway to us but many people can’t tell they’re not listening to a human. It is very concerning.
How can they look repetitive while being inconsistent? Do you mean in terms of presentation / "editing" style?
I actually avoid most YouTube channels that upload too frequently. Especially with consistent schedules.
Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.
Content farms, whether AI-generated or not, have an incentive to pump out low quality at high volume. Most of their content, even when it involves a human narrator, is heavily packed with AI-generated media.
A giveaway for detecting AI-generated text is the use of em-dashes, as noted in the OP - you are caught bang to rights!
Some keyboards and operating systems — iOS is one of them — convert two dashes into an emdash.
Not long ago, a statistical study found that AI almost always has an 'e' in its output. It is a firm indicator of AI slop. If you catch a post with an 'e', pay it no mind: it's probably AI.
Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.
My AI generator loves to write ergo, concordantly, and vis-a-vis.
As they say, the demand for racism far outstrips the supply. It's hard to spend all day outraged if you rely on reality to supply enough fodder.
This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).
In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.
Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
> Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
It's not built to make people angry per se - it's built to optimise for revenue generation - which so happens to be content that makes people angry.
People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.
In my view, if the platforms can't tackle this problem then the platforms should be shut down. Promoting this sort of material should be illegal, and "our business model won't work if we are made responsible for the things we do" is not an excuse.
I.e., while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side of publishing, which is being responsible for your actions. If you haven't solved both sides, you don't have a viable business model, in my view.
> In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).
> outraged at people who think racism is a problem.
This is one level of abstraction more than I deal with on a normal day.
The fake video which plays into people’s indignation for racism, is actually about baiting people who are critical about being baited by racism?
I agree with grandparent and think you have cause and effect backwards: people really do want to be outraged so Facebook and the like provide rage bait. Sometimes through algos tuning themselves to that need, sometimes deliberately.
But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in, except those locked into the Messenger ecosystem.
I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.
The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.
[1] https://www.tiktok.com/@gossip.goblin
I hadn't heard that saying.
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
(Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )
I like that saying. You can see it all the time on Reddit where, not even counting AI generated content, you see rage bait that is (re)posted literally years after the fact. It's like "yeah, OK this guy sucks, but why are you reposting this 5 years after it went viral?"
Right... when you overhear the elderly in the gym locker rooms talk about "the Mexicans that keep moving in" yeah racism is so short in supply....
Wut? If you listen to what real people say, racism is quite common and has all the power right now.
Rage sells. Not long after the EBT changes, there was a rash of videos of people playing the welfare recipient that people opposed to welfare imagine in their heads: women, usually black, speaking improperly about how the taxpayers need to take care of their kids.
Not sure how I feel about that, to be honest. On one hand I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them while researching a bit after my own mom called me raging about it and sent me the link.
I'm noticing more of these race baiting comments on YC too lately. AI?
You sure about that? I think the actions of the US administration, together with ICE and police work, provide quite enough.
Wrong takeaway. There are plenty of real incidents. The reason for posting fake incidents is to discredit the real ones.
That's why this administration is working hard to fill the demand.
Political!
I find the sound is a dead giveaway for most AI videos — the voices all sound like a low bitrate MP3.
Which will eventually get worked around and can easily be masked by just having a backing track.
that sounds like one of the worst heuristics I've ever heard, worse than "em-dash=ai" (em-dash equals ai to the illiterate class, who don't know what they are talking about on any subject and who also don't use em-dashes, but literate people do use em-dashes and also know what they are talking about. this is called the Dunning-Em-Dash Effect, where "dunning" refers to the payback of intellectual deficit whereas the illiterate think it's a name)
The em-dash=LLM thing is so crazy. For many years Microsoft Word has AUTOCORRECTED the typing of a single hyphen to the proper syntax for the context -- whether a hyphen, en-dash, or em-dash.
I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly-available text, as auto-corrected by Word...
The audio artifacts of an AI generated video are a far more reliable heuristic than the presence of a single character in a body of text.
Thank you for saving me the time writing this. Nothing screams midwit like "Em-dash = AI". If AI detection was this easy, we wouldn't have the issues we have today.
Of note is the other terrible heuristic I've seen thrown around, where "emojis = AI", and now "if you use 'not X, but Y' = AI".
No one uses em dashes
I really wish Google would flag videos with any AI content they detect.
It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a net of fake videos citing fake news articles, etc.
Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.
It's not really any different than stopping the sale of counterfeit goods on a platform. Which is a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knockoffs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people posting AI without disclosing it and the well will run dry quickly.
I said something to a friend about this years ago with AI... We're going to stretch the legal and political system to the point of breaking.
> eventually AI content will be indistinguishable from real-world content
You get it wrong. Real-world content will become indistinguishable from "AI" content because that's what people will consider normal.
It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation. Which would mean fake-AI detectors could have an inherent advantage over fake-AI creators.
Would be nice, but unlikely given that they are going in the opposite direction and having YouTube silently add AI to videos without the author even requesting it: https://www.bbc.com/future/article/20250822-youtube-is-using...
Wow! I hadn't seen this, thanks. Do you think they are doing it with relatively innocent motives?
Eventually it will make everyone say that videos are fake because nobody trusts videos anymore. We will ironically be back to something like the 40s where security cameras didn't exist and photography was rare and relatively expensive. A strange kind of privacy.
Conversely, my parents call "AI" anything they don't like or don't want to believe.
We truly live in wonderful times!
Next step: find out whether Youtube will remove it if you point it out
Answer? Probably "of course not"
They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably
The problem’s gonna be when Google as well is plastered with fake news articles about the same thing. There’s very little to no way you will know whether something is real or not.
That was already the case for anything printed or written. You have no way of telling if this is true or not.
I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen but everything already has happened.
> “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1]; you can see that in the amount of effort Captain Disillusion[2] puts into his videos to debunk crap.
But AI makes it 100× worse. First, generating an entirely convincing video only takes a little bit of prompting and waiting; no skill is required. Second, you can do it at massive scale. You can easily make 2 AI videos a day. If you want to doctor videos "the old way", you'd need a team of VFX artists to work at that scale.
I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.
[1] https://en.wikipedia.org/wiki/Brandolini%27s_law
[2] https://www.youtube.com/CaptainDisillusion
Seems rather simple to solve to me.
Just have video cameras (mostly phones these days) embed a cryptographic signature of the footage that video sharing platforms verify and display. That way we know a video was recorded with the uploader's camera and not just generated in computer software.
There aren't that many big tech companies that are responsible for creating the devices people use to record and host the platforms and software that people use to play back the content.
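The flow being proposed can be sketched in a few lines. This is a toy illustration, not how a real provenance system (e.g. C2PA / Content Credentials) works: a real camera would sign with an asymmetric private key held in secure hardware, whereas HMAC with a made-up shared key stands in here just to show the sign-at-capture / verify-at-upload handshake.

```python
import hashlib
import hmac

# Hypothetical stand-in for a key held in the camera's secure hardware.
DEVICE_KEY = b"key-burned-into-camera-hardware"

def sign_recording(video_bytes: bytes) -> str:
    """Camera side: hash the footage at capture time and sign the digest."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    return hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify_recording(video_bytes: bytes, signature: str) -> bool:
    """Platform side: recompute the signature and compare in constant time."""
    expected = sign_recording(video_bytes)
    return hmac.compare_digest(expected, signature)

footage = b"...raw camera frames..."
sig = sign_recording(footage)
print(verify_recording(footage, sig))              # untouched footage verifies
print(verify_recording(footage + b"edit", sig))    # altered/generated video fails
```

The hard parts the sketch glosses over are exactly what the parent comments argue about: key extraction from devices, re-encoding by platforms breaking the hash, and pointing a signed camera at a screen playing a fake.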
The current situation is not as bad as it can get; this is accelerant on the fire and it can get a lot worse
I've been using "It will get worse before it gets worse" more and more lately
It really isn’t that slop didn’t exist before.
It is that it is increasingly becoming indistinguishable from not-slop.
There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.
And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.
It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.
True for videos, but not true for any type of "text claim", which were already plenty 10 years ago and they were already hard to fight (think: misquoting people, strangely referring to science article, dubiously interpreting facts, etc.).
But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might make some progress in fact.
If people learn to be more skeptical (because at some point they might get that things can be fake) it might even be a gain. The transition period can be dangerous though, as always.
1 reply →
>AI makes it easier
How many people were getting quote-tweeted on Twitter with deepfake porn of themselves before Grok could remove the clothes from a person's photo with a simple prompt?
It's sad, yeah. And exhausting. The fact that you felt something was off and took the time to verify already puts you ahead of the curve, but it's depressing that this level of vigilance is becoming the baseline just to consume media safely.
Humans have strong self-preservation instincts, so at some point they'll start to ignore free internet content or treat it as nothing more than fiction. There may come a time when people demand in-person meetings for everything, a kind of return to nature, because they won't trust anything that can be machine generated. In a way that's positive: the current trend of weakening deeper human connections will be reversed thanks to AI :)
I think people will eventually learn to not trust any videos or stories they see online. I think the much bigger issue will be what happens when the LLM providers encode "alignment" into the models to insist on certain worldviews, opinions, or even falsehoods. Trust in LLMs and usage of them is increasing.
"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"
"Eventually" does a lot of heavy lifting in your prediction. This is like saying that if you feed poison to panda bears, they will eventually become immune to poison. On what timescale though? 8 million years from now, if the species survives, and if I've been feeding that poison to each and every generation... sure.
If I just feed it to 10 pandas, today, they're all dead.
And I suspect that humanity's position in this analogy is far closer to the latter than the former.
People stopped falling for photoshopped pictures and staged Chinese reels pretty quickly. I think people will pretty quickly decide anything outrageous is probably AI. And by people I mean the right half of the bell curve, which is all you can hope for. The left half will have problems in the world as they always have.
You don't need AI for that.
https://youtu.be/xiYZ__Ww02c
There are top posts daily of either 100% false images or very doctored images to portray a narrative (usually political or social) on reddit.
Then the comments are all usually not critical of the image but to portray the people supporting the [fake] image as being in a cult. It's wild!
> Will create
As others have noted, it’s a long-term trend - agree that as you note it’ll get worse. The Russian psy-ops campaigns from the Internet Research Agency during Trump #1 campaign being a notable entry, where for example they set up both fake far-left and far-right protest events on FB and used these as engagement bait on the right/left. (I’m sure the US is doing the same/worse to their adversaries too.)
Whatever fraction bots play overall, it has to be way higher for political content given the power dynamics.
Google is complicit in this sort of content by hosting it, no questions asked. They will happily watch society tear itself apart as long as they're getting some ad revenue. Same as the other social media companies.
And yes, I know the argument about Youtube being a platform that can be used for good and bad. But Google controls and creates the algorithm and decides what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "good and bad" angle.