Comment by seiferteric

12 hours ago

My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video, and even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and there it said it uses AI for "some" of the videos to recreate "real" events. I really doubt that... it all looks fake. I am just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads... it's sad.

    divisiveness this kind of stuff will create

I'm pretty sure we're already decades into the world of "has created".

Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.

And who can blame them (us)? It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well-being) one way or the other.

It's just been too much for too long and you can tell.

  • > It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite

    It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.

  • it's malware in the mind. it was happening before deep fakes were possible. news outlets and journalists have always had an incentive to present extreme takes to get people angry, 'cause that sells. now we have tools that pretty much just accelerate and automate that process. it's interesting. it would be helpful to figure out how to prevent people (especially upcoming generations) from getting swept away by all this.

Just twenty minutes ago I got a panic call: someone was getting dozens of messages that their virus scanner is not working and they have hundreds of viruses. By blocking Google Chrome from sending messages to the Windows notification area, everything on the computer went back to normal.

The customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.

Maybe I should set up a Pi-hole business...
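
In case it helps anyone: the fix above can also be scripted. A rough sketch in Python, assuming a recent Windows build where the per-app notification switch lives in the registry (Chrome's exact app ID varies per install, hence the substring match):

    import winreg

    # The per-app toggles behind "Settings > System > Notifications" live
    # under this hive on recent Windows 10/11 builds.
    BASE = r"Software\Microsoft\Windows\CurrentVersion\Notifications\Settings"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, BASE) as settings:
        index = 0
        while True:
            try:
                app_id = winreg.EnumKey(settings, index)
            except OSError:
                break  # no more registered notification senders
            index += 1
            if "chrome" in app_id.lower():
                # Enabled = 0 blocks that app's toast notifications, same as
                # unticking the switch in the Settings UI.
                with winreg.OpenKey(settings, app_id, 0, winreg.KEY_SET_VALUE) as app:
                    winreg.SetValueEx(app, "Enabled", 0, winreg.REG_DWORD, 0)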

  • If there were a GDPR-type law for any company above a certain size (so as to only catch the big ad networks) that penalized the propagation of "false" ads claiming security issues, monetary benefits, or government services, then it could stop the transmission of most of the really problematic ads. Any company the size of Google is also in the risk-minimization business; they would set up a workflow to filter out "illegal" ads to at least a defensible level, so that they don't get fines that cost more than the ads pay.

    Also, can you set Windows not to allow ad notifications through to the notification area? If not, that should also be a point of the law.

    Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.

If there are ad incentives, assume all content is fake by default.

On the actual open, decentralized internet, which still exists (Mastodon, IRC, Matrix...), bots are rare.

  • That’s not because it’s decentralized or open; it’s because it doesn’t matter. If it were larger or more important, it would get overrun by bots in weeks.

    Any platform that wants to resist bots needs to: tie personas to real or expensive identities; force people to add an AI flag to AI content; let readers filter for content not marked as AI; and be absolutely ruthless in permabanning anyone who posts AI content unmarked. One strike and you are dead forever.

    The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea how to handle that.
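
    A minimal sketch of those rules as feed logic, in Python (all names here are hypothetical, and the hard parts, verifying identities and detecting unmarked AI content, are hand-waved):

        from dataclasses import dataclass

        @dataclass
        class Persona:
            verified: bool         # tied to a real or expensive identity
            banned: bool = False

        @dataclass
        class Post:
            author: Persona
            body: str
            ai_flagged: bool       # the poster must self-flag AI content

        def enforce_one_strike(post: Post, detected_as_ai: bool) -> None:
            # Ruthless permaban: unmarked AI content kills the account forever.
            if detected_as_ai and not post.ai_flagged:
                post.author.banned = True

        def feed(posts: list[Post], hide_ai: bool) -> list[Post]:
            # Readers can filter down to content not marked as AI.
            return [p for p in posts
                    if p.author.verified and not p.author.banned
                    and not (hide_ai and p.ai_flagged)]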

    • It's never going to happen, but I feel we had already solved all of this with forums and IRC back in the day. I wish we'd gravitate towards that kind of internet again.

      Group sizes were smaller and as such easier to moderate. There were plenty of similar-interest forums, which meant that even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections of the same forum) were good at filtering out low-value posters.

      There were bots, but they were not as big of a problem. The message amplification was smaller, and it was probably harder to evade bans.

      1 reply →

I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality: never quite knowing the truth.

  • Those subreddits label content wrong all the time. Some of the top commenters are trolling. (I've seen one cooking video where the most-upvoted comment is "AI, the sauce stops when it hits the plate"... as thick sauce should do.)

    You're training yourself with a very unreliable source of truth.

    • > Those subreddits label content wrong all the time.

      Intentionally, if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.

    • > You're training yourself with a very unreliable source of truth.

      I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.

      If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?

      Alternatively, I watch a video and think it’s AI, but a commenter points to a source like YouTube where the video was posted 5 years ago, or to multiple similar videos and news articles about the video's weird subject. How is that unreliable?

      1 reply →

  • Before photography, we knew something was truthful because someone trustworthy vouched for it.

    Now that photos and videos can be faked, we'll have to go back to the older system.

    • It was always easy to fake photos too: just stage the scene, or selectively frame what you want. There is no such thing as a piece of media you can trust.

      2 replies →

    • Ah yes, the good old days of witch trials and pogroms.

      I am no big fan of AI, but misinformation is a tale as old as time.

  • My favorite theory about those subreddits is that it's the AI companies getting free labeling from (supposedly) authentic humans so they can figure out how best to tweak their models to fool more and more people.

A reliable giveaway for AI-generated videos is just a quick glance at the account's post history: the videos will look frequent and repetitive, and lack a consistent subject/background. And that's not something that'll go away when AI videos get better.
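
As a toy illustration of that heuristic, a first-pass filter might look like this (the thresholds and the title-similarity proxy are invented, and real post histories would have to come from a platform API):

    from datetime import datetime
    from difflib import SequenceMatcher

    def looks_like_slop(uploads: list[tuple[datetime, str]]) -> bool:
        """uploads: (timestamp, title) pairs from an account's post history."""
        if len(uploads) < 5:
            return False
        uploads = sorted(uploads)
        span_days = max((uploads[-1][0] - uploads[0][0]).days, 1)
        per_day = len(uploads) / span_days                 # "frequent"
        titles = [title for _, title in uploads]
        similarity = sum(SequenceMatcher(None, a, b).ratio()
                         for a, b in zip(titles, titles[1:])) / (len(titles) - 1)
        return per_day > 3 and similarity > 0.6            # "repetitive"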

  • > [...] lack a consistent subject/background. And that's not something that'll go away when AI videos get better

    Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?

    • AI is capable of consistent characters now, yes, but the platforms themselves provide little incentive to use them. TikTok and Instagram Reels are designed to serve recommendations, not a user-curated feed of people you follow, so consistency is not needed for virality.

  • How can they look repetitive while being inconsistent? Do you mean in terms of presentation / "editing" style?

  • I actually avoid most YouTube channels that upload too frequently. Especially with consistent schedules.

    Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.

  • A giveaway for detecting AI-generated text is the use of em-dashes, as noted in the OP. You are caught bang to rights!

    • Not long ago, a statistical study found that AI almost always has an 'e' in its output. It is a firm indicator of AI slop. If you catch a post with an 'e', pay it no mind: it's probably AI.

      Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.

      9 replies →

As they say, the demand for racism far outstrips the supply. It's hard to spend all day outraged if you rely on reality to supply enough fodder.

  • This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).

    In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

    We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.

    Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

    • > Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

      It's not built to make people angry per se; it's built to optimise for revenue generation, which happens to mean content that makes people angry.

      People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.

      In my view, if the platforms can't tackle this problem, then the platforms should be shut down. Promoting this sort of material should be illegal, and "our business model won't work if we're made responsible for the things we do" is not an excuse.

      I.e., while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side of publishing, which is being responsible for your actions. If you haven't solved both sides, you don't have a viable business model, in my view.

    • > In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

      I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).

    • I agree with the grandparent and think you have cause and effect backwards: people really do want to be outraged, so Facebook and the like provide rage bait. Sometimes through algos tuning themselves to that need, sometimes deliberately.

      But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in; only those locked into the Messenger ecosystem.

      I don't use Facebook, but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it; I mark it "not interested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (never mind), and when AI makes an appearance it's the narrative kind[1], which I admit I like, or old jokes recycled with AI.

      The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.

      [1] https://www.tiktok.com/@gossip.goblin

      12 replies →

  • I hadn't heard that saying.

    Many people seek to be outraged. Many people seek awareness of the truth. Many people seek help for their problems. These are not mutually exclusive.

    Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.

    In various forms, with various levels of harm, and with various levels of evidence available.

    (Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)

    Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.

    (Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )

  • I like that saying. You can see it all the time on Reddit where, not even counting AI-generated content, you see rage bait that is (re)posted literally years after the fact. It's like, "yeah, OK, this guy sucks, but why are you reposting this 5 years after it went viral?"

  • Rage sells. Not long after the EBT changes, there was a rash of videos of people playing the welfare recipient that people against welfare imagine in their heads: women, usually black, speaking improperly about how the taxpayers need to take care of their kids.

    Not sure how I feel about that, to be honest. On one hand, I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon these videos while researching a bit after my own mom called me, raging about one, and sent me the link.

  • You sure about that? I think the actions of the US administration, together with ICE and police work, provide quite enough.

  • Wut? If you listen to what real people say, racism is quite common and has all the power right now.

  • Wrong takeaway. There are plenty of real incidents. The reason for posting fake incidents is to discredit the real ones.

It's sad, yeah. And exhausting. The fact that you felt something was off and took the time to verify already puts you ahead of the curve, but it's depressing that this level of vigilance is becoming the baseline just to consume media safely.

I really wish Google would flag videos with any AI content that they detect.

  • It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a web of fake videos citing fake news articles, etc.

    Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.

    • It's not really any different than stopping the sale of counterfeit goods on a platform. Which is a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knock-offs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people posting AI without disclosing it and the well will run dry quickly.

      2 replies →

    • > eventually AI content will be indistinguishable from real-world content

      You have it backwards: real-world content will become indistinguishable from "AI" content, because that's what people will consider normal.

    • I said something like this to a friend years ago about AI... We're going to stretch the legal and political system to the point of breaking.

    • It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation, which would mean AI-fake detectors could have an inherent advantage over AI-fake creators.

The problem’s gonna be when Google search as well is plastered with fake news articles about the same thing. There’ll be little to no way to know whether something is real or not.

  • That was already the case for anything printed or written. You have no way of telling if this is true or not.

I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier”? Was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen, but everything has already happened.

  • > “AI makes it easier”? Was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?

    Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.

    It takes a lot of effort to find the right clip and cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1]; you can see that in the amount of effort Captain Disillusion[2] puts into his videos to debunk crap.

    But AI makes it 100× worse. First, generating an entirely convincing video only takes a little prompting and waiting; no skill is required. Second, you can do it at massive scale: you can easily make 2 AI videos a day. If you wanted to doctor videos "the old way" at this scale, you'd need a team of VFX artists.

    I genuinely think that tech-literate folks, like myself and other Hacker News posters, don't understand that significantly lowering the barrier to entry to X doesn't leave X equivalent to what it was before. Scale changes everything.

    [1] https://en.wikipedia.org/wiki/Brandolini%27s_law

    [2] https://www.youtube.com/CaptainDisillusion

  • The current situation is not as bad as it can get; this is accelerant on the fire, and it can get a lot worse.

  • It really isn’t that slop didn’t exist before.

    It is that it is increasingly becoming indistinguishable from not-slop.

    There is a different bar of believability for each of us. None of us is always right when making a judgement. But the cues for making good calls without digging are drying up.

    And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.

    This isn’t the same thing staying the same. Nothing has ever stayed the same: “staying the same” isn’t a thing in nature and hasn’t been the trend in human history.

    • True for videos, but not for any type of "text claim", of which there were already plenty 10 years ago, and which were already hard to fight (think: misquoting people, strange references to science articles, dubious interpretations of facts, etc.).

      But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might in fact be making some progress.

      If people learn to be more skeptical (because at some point they might grasp that things can be fake), it might even be a gain. The transition period can be dangerous, though, as always.

      1 reply →

I find the sound is a dead giveaway for most AI videos: the voices all sound like a low-bitrate MP3.

That will eventually get worked around, and it can easily be masked by just adding a backing track.

  • that sounds like one of the worst heuristics I've ever heard, worse than "em-dash = AI" (em-dash equals AI to the illiterate class, who don't know what they are talking about on any subject and who also don't use em-dashes; literate people do use em-dashes and also know what they are talking about. this is called the Dunning-Em-Dash Effect, where "Dunning" refers to the payback of an intellectual deficit, whereas the illiterate think it's a name)

    • The em-dash=LLM thing is so crazy. For many years Microsoft Word has AUTOCORRECTED the typing of a single hyphen to the proper syntax for the context -- whether a hyphen, en-dash, or em-dash.

      I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly available text, as auto-corrected by Word...
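
      As a toy approximation of that autocorrect behaviour (Word's real rules are more involved; these two substitutions are just the gist):

          import re

          def autocorrect_dashes(text: str) -> str:
              # "word--word" becomes word + em dash + word
              text = re.sub(r"(?<=\w)--(?=\w)", "\u2014", text)
              # "word - word" becomes a spaced en dash
              text = re.sub(r"(?<=\w) - (?=\w)", " \u2013 ", text)
              return text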

      3 replies →

    • Thank you for saving me the time of writing this. Nothing screams midwit like "em-dash = AI". If AI detection were this easy, we wouldn't have the issues we have today.

There are top posts on Reddit daily of either 100% false images or heavily doctored images meant to push a narrative (usually political or social).

Then the comments usually aren't critical of the image but instead portray the people supporting the [fake] image's narrative as being in a cult. It's wild!

Next step: find out whether YouTube will remove it if you point it out.

Answer? Probably "of course not."

They're too busy demonetizing videos, aggressively copyright-striking things, or promoting Shorts, presumably.

Google is complicit in this sort of content, hosting it with no questions asked. They will happily watch society tear itself apart as long as they are getting some ad revenue. Same as the other social media companies.

And yes, I know the argument that YouTube is a platform and can be used for good and bad. But Google creates and controls the algorithm and what is pushed to people. Make it a dumb video-hosting site like it used to be and I'll buy the "good and bad" angle.