Comment by Barbing
3 hours ago
Very interesting. I've thought in a completely different direction: human verification. "IRL KYC for friends" or something.
I always hit problems with it though. Let's say I can find someone I trust. Maybe it's me. Say I only enter online spaces, at least with the intent of discussion, with those I've met in real life. Well, at some point, someone I've met face to face will be incentivized to, say, share a link to their friend's concert. Perhaps there's a free guest list spot in it for them if the show sells out. Or maybe it's all gravy, but eventually:
I want to expand the network we've created together, and it means trusting someone else to bring in people to the online space I've never met in real life. This could again be fine for a long time, but won't someone eventually be incentivized (especially if this practice were common) to promote this supplement, promote that politician...?
(I recognize astroturfing is different from the impending slop tsunami, but both feel like they're in the same stadium)
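The transitive-vouching problem above can be made concrete with a toy model: trust starts at 1.0 for people you've met in person and decays with each hop of vouching, so someone three introductions removed carries much less weight. This is a hypothetical sketch, not anyone's actual system; the names, the graph shape, and the 0.5 decay factor are all illustrative assumptions.

```python
from collections import deque

def trust_scores(met_irl, vouches, decay=0.5):
    """Breadth-first walk over the vouch graph.

    met_irl: set of people you've met face to face (trust = 1.0).
    vouches: dict mapping a person to the people they vouch for.
    Each hop of vouching multiplies trust by `decay`; if someone is
    reachable by several paths, they keep their best score.
    """
    scores = {person: 1.0 for person in met_irl}
    queue = deque(met_irl)
    while queue:
        person = queue.popleft()
        for invitee in vouches.get(person, []):
            candidate = scores[person] * decay
            if candidate > scores.get(invitee, 0.0):
                scores[invitee] = candidate
                queue.append(invitee)
    return scores

# I met Alice and Bob in real life; Alice vouches for Carol,
# and Carol vouches for Dave, whom I've never met.
vouches = {"alice": ["carol"], "carol": ["dave"]}
scores = trust_scores({"alice", "bob"}, vouches)
# alice/bob sit at 1.0, carol at 0.5, dave at 0.25
```

The model makes the incentive problem visible: once Dave-level members can bring value into the space (links, promotions), the cheapest attack is to buy a vouch from a Carol, and no choice of decay factor fixes that by itself.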
Proof of human is the natural first stop.
Your solution shares its essence with a club, a WhatsApp group, or an interest group.
It works, but you will still be at the mercy of the large communities and economies of thought that the members are a part of.
That is the broader environment you are a part of.
Everyone from FAANG firms and governments to game companies struggles to tell real people from bots.
If your platform is global, then you have to contend with users from different legal regimes and jurisdictions.
The issue is that verification is logistically expensive, legally complex, liable to infringe on rights, and, on top of all that, error-prone.
To top it off: if proof of life ends up gatekeeping any form of value, you will set up incentives to break the verification.