Comment by josfredo
4 hours ago
I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier” — was it really hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen, but everything has already happened.
> “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip and cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1]; you can see that in the amount of effort Captain Disillusion[2] puts into his videos to debunk crap.
But AI makes it 100× worse. First, generating an entirely convincing video only takes a bit of prompting and waiting; no skill is required. Second, you can do that at massive scale. You can easily make 2 AI videos a day; to doctor videos "the old way" at that rate, you'd need a team of VFX artists.
I genuinely think that tech-literate folks, like myself and other Hacker News posters, don't understand that significantly lowering the barrier to entry to X doesn't leave X equivalent to what it was before. Scale changes everything.
[1] https://en.wikipedia.org/wiki/Brandolini%27s_law
[2] https://www.youtube.com/CaptainDisillusion
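A back-of-envelope sketch of the scale point above. The hours-per-video figures are purely illustrative assumptions, not measurements; the point is only that the ratio between the two workflows is large:

```python
# Back-of-envelope comparison of fake-video output rates.
# All per-video effort numbers below are assumptions for illustration.

HOURS_PER_DAY = 8          # one person's working day
DAYS_PER_MONTH = 30

manual_hours_per_video = 40.0   # assumed: find clip, cut context, doctor with VFX
ai_hours_per_video = 0.5        # assumed: prompt, wait, pick the best take

budget = HOURS_PER_DAY * DAYS_PER_MONTH          # 240 person-hours/month
manual_per_month = budget / manual_hours_per_video
ai_per_month = budget / ai_hours_per_video

print(f"manual: ~{manual_per_month:.0f} videos/month")   # ~6
print(f"AI:     ~{ai_per_month:.0f} videos/month")       # ~480
print(f"ratio:  ~{ai_per_month / manual_per_month:.0f}x")
```

Even with generous assumptions for the manual workflow, one person with a generator matches the monthly output of a whole VFX team.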
The current situation is not as bad as it can get; this is accelerant on the fire and it can get a lot worse
I've been using "It will get worse before it gets worse" more and more lately
It really isn’t that slop didn’t exist before.
It is that it is increasingly becoming indistinguishable from not-slop.
There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.
And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.
It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.
True for videos, but not for "text claims", which were already plentiful 10 years ago and already hard to fight (think: misquoting people, citing scientific articles out of context, dubiously interpreting facts, etc.).
But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we may in fact be making some progress.
If people learn to be more skeptical (because at some point they may grasp that anything can be fake), it might even be a gain. The transition period can be dangerous, though, as always.
You are right that text had this problem.
But today’s text manufacturing isn’t our grand…, well, yesterday’s text manufacturing.
And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.
The big problem isn’t the tech getting smarter though.
It’s the legal and social tolerance for conflicts of interest at scale: unwanted (or dark-pattern-permissioned) surveillance, all but unavoidable, being used to manipulate feeds controlled by third parties (sitting between us and any organically intentioned contacts), toward influencing us in any way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple of decades of hard lessons.
Incentives, as they say, matter.
Misinformation would exist regardless, but we didn’t need it to be a cornerstone business model, with trillions of dollars of market cap unifying its globally coordinated, efficient, effective, near-unavoidable, continual insertion into our and our neighbors’ lives. With shareholders relentlessly demanding double-digit growth.
It doesn’t take any special game theory or economics to see the problematic loop there, or to predict that it will continue to get worse, amplified by every AI advance, as long as it isn’t addressed.