Comment by figassis
10 hours ago
The problem seems to be identity, and it's a real problem that looks like it will only get worse. What about creating a zero-knowledge digital identity service (maybe centralized, maybe decentralized, I don't know) where you prove you're human via your government ID, passport, driver's license, whatever, and the service can then attest you're a real person? So if I'm Digg, I would ask for some form of OAuth, the system would simply verify that you are in fact a human, and you would go on to create your account, forever verified. This way the identity service only does identity: it keeps no record of where you are attesting, no logs, nothing, just your identity, basically saying yes/no, with no sharing of IDs or any other data.
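To make the flow concrete, here's a minimal sketch of the "verify once, attest forever, keep no logs" idea. Everything here is hypothetical (the class name, the HMAC-based credential, the `enroll`/`attest` methods are my invention, and a real system would use proper signatures or ZK proofs rather than a shared HMAC key), but it shows the shape: enrollment discards the documents, and attestation is a stateless yes/no.

```python
import hmac
import hashlib
import secrets

class IdentityService:
    """Hypothetical attestation service: verifies a person once,
    then answers yes/no without logging which app asked."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # service-wide signing key

    def enroll(self, government_id_document: bytes) -> str:
        # Real KYC would happen here; the document is discarded afterward.
        person_id = secrets.token_hex(16)  # random, unlinkable to the real ID
        tag = hmac.new(self._key, person_id.encode(), hashlib.sha256).hexdigest()
        return f"{person_id}.{tag}"        # the reusable credential

    def attest(self, credential: str) -> bool:
        # Stateless yes/no: no record kept of the caller or the relying app.
        person_id, _, tag = credential.partition(".")
        expected = hmac.new(self._key, person_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

svc = IdentityService()
cred = svc.enroll(b"<passport scan>")  # one hurdle in life, done once
assert svc.attest(cred)                # any app asks "is this a human?" -> yes
assert not svc.attest("forged.deadbeef")
```

The key property is that `attest` needs no database lookup, so there is nothing to log and nothing to leak.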
So people would go through one hurdle in life, to get this id, and reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
1/ KYC is pricey, and users might not want to pay for it
2/ Spammers can hire real people to farm accounts
I think this idea might work if we:
- create a reputation graph, where valuable contributors vote for others and spread reputation
- let users fine-tune their reputation graph, so instead of "one for all", each user can have a personal customized graph (pick 30 authorities and we rebuild the graph from there)
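The "pick 30 authorities and rebuild the graph from there" idea is essentially personalized PageRank: reputation mass originates only at the user's chosen seeds and flows along vouch edges. A small sketch (the graph shape and function are illustrative, not any particular library's API):

```python
def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    """graph: {node: [nodes it vouches for]}; seeds: the authorities
    this user trusts. Rank mass teleports back only to the seeds,
    so reputation spreads outward from them."""
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    seed_mass = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(seed_mass)
    for _ in range(iters):
        nxt = {n: (1 - damping) * seed_mass[n] for n in nodes}
        for n, outs in graph.items():
            if outs:
                share = damping * rank[n] / len(outs)  # split vouching weight
                for v in outs:
                    nxt[v] += share
        rank = nxt
    return rank

# Spammers vouching for each other get no reputation unless a seed reaches them.
votes = {"alice": ["bob", "carol"], "bob": ["carol"], "spammer": ["spammer2"]}
r = personalized_pagerank(votes, seeds=["alice"])
assert r["carol"] > r["spammer2"]
```

Two users with different seed sets get different rankings over the same vote data, which is exactly the "personal customized graph" property.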
I think apps that want assurance of your identity should pay for your KYC. They want valuable people, after all, and this should go into their CAC. Users still pay nothing, and the identity service does not care about their info: after verification it drops all the details (uploaded documents, whatever) and keeps only a certificate.
The cost for this service is likely keeping up with ID systems for multiple countries, infra and support.
Potentially, if this is made into a protocol, it can be decentralized, kind of like the SSL certificate authority system, so each country manages its own rules.
But they can just plug an AI into a verified account.
I am less concerned here. If you plug an AI into your identity, I guess your identity is revoked. I see the problem, though: once a service notices you're an AI, there is no way to block you, because we don't really know who you are, only that you're human.
So we need a mechanism that makes this identity verifiable: maybe you get a unique identifier from the identity service, so the app can block the account. There is no mechanism to report you to, say, the identity service ("this is a bot"), so each app manages its own block list.
The risk here is fingerprinting: your ID can be cross-referenced across apps. Maybe this is where you implement a ZK proof that you are who you say you are.
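One standard way to get "blockable within an app, unlinkable across apps" without a full ZK proof is a pairwise pseudonymous identifier: derive the per-app ID from a user secret and the app's domain. A sketch, assuming a hypothetical `app_scoped_id` derivation held by the identity service (or the user's own wallet):

```python
import hmac
import hashlib

def app_scoped_id(user_secret: bytes, app_domain: str) -> str:
    """Derive a stable per-app identifier: the same user always maps to
    the same ID within one app (so the app can block them), but the IDs
    two different apps see cannot be cross-referenced without the secret."""
    return hmac.new(user_secret, app_domain.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"held by the identity service or the user's wallet"
digg_id = app_scoped_id(secret, "digg.com")
hn_id = app_scoped_id(secret, "news.ycombinator.com")

assert digg_id == app_scoped_id(secret, "digg.com")  # stable -> blockable
assert digg_id != hn_id                              # unlinkable across apps
```

An app can ban `digg_id` forever, but colluding apps comparing their ID lists learn nothing about which accounts belong to the same person.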
I don’t love the original idea because uploading identification is risky. You could just plug AI into a verified account but at least the vector is a single account instead of unbounded.
But then if the AI is detected that person can be permanently banned. No more AI. No new accounts.
So if someone compromises your identity they can unperson you? How will the AI be detected? By another AI?
No, the problem is people want everything for free. The solution is very simple: charge $5 to open an account, only allow a person to moderate one forum/community/subreddit/etc., and ruthlessly delete accounts that break the rules. This would work, but no one on the internet wants to pay for a quality forum, so we deal with the same crap over and over and pretend there is some other solution.
They want it ad-supported so they can block all the ads and let the suckers pay. Then they complain relentlessly when the content caters to suckers.