Comment by Nevermark

7 hours ago

It really isn’t that slop didn’t exist before.

It is that it is increasingly becoming indistinguishable from not-slop.

There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.

And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.

It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.

True for videos, but not true for any kind of "text claim", which was already plentiful 10 years ago and already hard to fight (think: misquoting people, citing scientific articles misleadingly, interpreting facts dubiously, etc.).

But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we may actually be making progress.

If people learn to be more skeptical (because at some point they may realize that things can be fake), it might even be a net gain. The transition period can be dangerous, though, as always.

  • You are right that text had this problem.

    But today’s text manufacturing isn’t our grand..., well, yesterday’s text manufacturing.

    And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.

    The big problem isn’t the tech getting smarter though.

    It’s the legal and social tolerance for conflicts of interest at scale. Like unwanted (or dark-pattern-permissioned) surveillance, which is all but unavoidable, being used to manipulate feeds controlled by third parties (sitting between us and any organically intentioned contacts), toward influencing us in whatever way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple decades of hard lessons.

    Incentives, as they say, matter.

    Misinformation would exist regardless, but we didn’t need it to become a cornerstone business model, with trillions of dollars of market cap unifying its globally coordinated, efficient and effective, near-unavoidable, continual insertion into our and our neighbors’ lives. With shareholders relentlessly demanding double-digit growth.

    It doesn’t take any special game theory or economic theory to see the problematic loop there. Or to predict that it will continue to get worse, and will be amplified by every AI advance, as long as it isn’t addressed.