
Comment by vladms


True for videos, but not for any type of "text claim"; those were already plentiful 10 years ago and already hard to fight (think: misquoting people, loosely citing scientific articles, dubiously interpreting facts, etc.).

But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we may in fact be making some progress.

If people learn to be more skeptical (because at some point they may realize that things can be fake), it might even be a gain. The transition period can be dangerous, though, as always.

You are right that text has had this problem.

But today’s text manufacturing isn’t your grandfather’s, or even yesterday’s, text manufacturing.

And pretty soon it will be highly persuasive models, with endless patience and manufactured, personalized credibility and attachment, “helping” people figure out reality.

The big problem isn’t the tech getting smarter though.

It’s the legal and social tolerance for conflicts of interest at scale: unwanted (or dark-pattern-permissioned) surveillance, all but unavoidable, being used to manipulate feeds controlled by third parties (sitting between us and any contacts with organic intentions), toward influencing us in whatever way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple of decades of hard lessons.

Incentives, as they say, matter.

Misinformation would exist regardless, but we didn’t need it to become a cornerstone business model, with trillions of dollars of market cap unifying its globally coordinated, efficient, effective, nearly unavoidable, continual insertion into our lives and our neighbors’ lives, with shareholders relentlessly demanding double-digit growth.

It doesn’t take any special game theory or economic theory to see the problematic loop there, or to predict that it will continue to get worse, amplified by every AI advance, as long as it isn’t addressed.