Comment by intended

13 hours ago

Our society, pre-internet, built systems to manage trust. The conditions that allowed those systems to exist (the speed of data transmission, the ratio of content generation to verification, the ability to shape consensus) have changed.

You are sounding the clarion call for community and cooperation, and it will not work. Not because people don't want community or better things, but because incentives make the world go round.

The choice between making some money and keeping the information commons clean is no choice at all; the money wins. That degradation of the commons means no one can escape. No community you form, no group you build, dodges the fallout when someone decides to set fire to shared infrastructure.

We are moving into the dark forest era of the information economy. As models improve, inference costs drop, and capacity increases, the primary organism creating content online will be the bot.

Instead of building communities of people, build collections based on rules of engagement. Participants, be they bots or humans, must follow prescribed rules of conflict and debate.

That way it doesn’t matter if you are talking to a machine or a person. All that matters is that the rules were followed.
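
To make that concrete, here's a minimal sketch of what such a gate could look like. All the names (Message, Rule, admit) and the example rules are invented for illustration, not any real platform's API; the point is just that the gate judges conduct and never identity.

```python
# Toy sketch of a rules-of-engagement gate. Message, Rule, admit, and the
# example rules are all hypothetical; no real platform's API is implied.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Message:
    author: str         # bot or human; the gate never looks at this
    body: str
    cites_sources: bool

# A rule is a label plus a predicate over the message.
Rule = Tuple[str, Callable[[Message], bool]]

RULES: List[Rule] = [
    ("non-empty contribution", lambda m: bool(m.body.strip())),
    ("claims cite sources",    lambda m: m.cites_sources),
    ("no shouting",            lambda m: not m.body.isupper()),
]

def admit(msg: Message) -> Tuple[bool, List[str]]:
    """Admit a message iff every rule passes; identity is irrelevant,
    only conduct is judged."""
    violations = [name for name, check in RULES if not check(msg)]
    return (not violations, violations)

# The same gate applies to everyone:
print(admit(Message("bot-42", "See RFC 9110 for details.", True)))  # (True, [])
print(admit(Message("human-7", "WRONG!!", False)))                  # (False, [...])
```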

Very interesting. I've been thinking in a completely different direction: human verification, "IRL KYC for friends" or something like that.

I always hit problems with it, though. Let's say I can find someone I trust. Maybe it's me. Say I only enter online spaces, at least with the intent of discussion, with people I've met in real life. Well, at some point, someone I've met face to face will be incentivized to, say, share a link to their friend's concert. Perhaps there's a free guest-list spot in it for them if the show sells out. Or maybe it's all gravy, but eventually:

I want to expand the network we've created together, and that means trusting someone else to bring people I've never met in real life into the online space. This could again be fine for a long time, but won't someone eventually be incentivized (especially if this practice were common) to promote this supplement, promote that politician...?
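
To make the failure mode concrete, here's a toy sketch (TrustGraph, met_irl, and vouch are made-up names, not a real system): every vouch adds a hop, and any member more than one hop away is someone I'm trusting entirely on another member's word.

```python
# Toy model of the "IRL KYC" network: direct edges are people I've met
# face to face; vouch edges bring in people I may never have met.
# All names here are hypothetical.
class TrustGraph:
    def __init__(self, me: str):
        self.me = me
        self.sponsor: dict[str, str] = {}  # member -> who brought them in

    def met_irl(self, person: str) -> None:
        """Admit someone I have personally met face to face."""
        self.sponsor[person] = self.me

    def vouch(self, member: str, newcomer: str) -> None:
        """An existing member brings in someone new."""
        if member != self.me and member not in self.sponsor:
            raise ValueError(f"{member} is not in the network")
        self.sponsor[newcomer] = member

    def chain(self, person: str) -> list[str]:
        """Vouch chain from me to a member; length > 1 means I am
        trusting a stranger on someone else's word."""
        path = []
        while person != self.me:
            path.append(person)
            person = self.sponsor[person]
        return list(reversed(path))

g = TrustGraph("me")
g.met_irl("alice")        # verified face to face
g.vouch("alice", "bob")   # Bob enters on Alice's word alone
print(g.chain("bob"))     # ['alice', 'bob']: two hops, one stranger
```

Nothing in the structure caps how long the chains grow, which is exactly where the incentive problem creeps back in.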

(I recognize astroturfing is different from the impending slop tsunami, but both feel like they're in the same stadium)

  • Proof of human is the natural first stop.

    Your solution shares its essence with a club, a WhatsApp group, or an interest group.

    It works, but you will still be at the mercy of the large communities and economies of thought that the members are a part of.

    That is the broader environment you are a part of.

    Everyone from FAANG firms and governments to game companies struggles to distinguish real people from bots.

    If your platform is global, then you have to contend with users from different legal regimes and jurisdictions.

    The issue is that verification is logistically expensive, rights-infringing, legally complex, and, on top of all that, error-prone.

    To top it off: if proof of human ends up gatekeeping any form of value, you will set up incentives to break verification.