Comment by SilverSlash

11 hours ago

I really wish Google would flag any video in which they detect AI content.

It's a band-aid solution, though, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see whole networks of fake videos citing fake news articles, etc.

Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.

  • It's not really any different than stopping the sale of counterfeit goods on a platform. That's a challenge, but hardly an insurmountable one, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knockoffs to a small number of people and reliably get paid within 72 hours. To make the same from "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay out unless you're verified, so ban people who post AI content without disclosing it and the well will run dry quickly.

    • By that logic, email spam will never have an incentive either. That is a relief! I was going to predict that someday people would start sending millions of misleading emails or texts!

  • > eventually AI content will be indistinguishable from real-world content

    You have it backwards. Real-world content will become indistinguishable from "AI" content, because that's what people will come to consider normal.

  • I said something like this to a friend years ago about AI... We're going to stretch the legal and political system to the breaking point.

  • It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation, which would mean AI-fake detectors could have an inherent advantage over AI-fake creators (see the toy sketch below).
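
    A minimal, hypothetical sketch of that asymmetry in general (not about AI detection specifically, and every name here is made up for illustration): checking a candidate against a SHA-256 hash costs one hash call, while producing a matching input requires a brute-force search over the whole candidate space.

        import hashlib
        import itertools
        import string
        import time

        # Hypothetical target: the SHA-256 of a short lowercase "secret" (here, "cat").
        TARGET = hashlib.sha256(b"cat").hexdigest()

        def recognize(candidate: bytes) -> bool:
            # Recognition: one hash plus one comparison, regardless of search-space size.
            return hashlib.sha256(candidate).hexdigest() == TARGET

        def create(max_len: int = 3) -> bytes:
            # Creation: brute-force search over every lowercase string up to max_len.
            for length in range(1, max_len + 1):
                for combo in itertools.product(string.ascii_lowercase, repeat=length):
                    guess = "".join(combo).encode()
                    if recognize(guess):
                        return guess
            raise RuntimeError("no match found")

        t0 = time.perf_counter()
        recognize(b"cat")
        t_recognize = time.perf_counter() - t0

        t0 = time.perf_counter()
        create()
        t_create = time.perf_counter() - t0

        print(f"recognize: {t_recognize:.6f}s, create: {t_create:.6f}s")

    Running it shows the single verification finishing orders of magnitude faster than the search that produces the same string, which is the shape of the advantage the comment is pointing at; whether real AI-content detectors actually enjoy that advantage is a separate empirical question.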