Comment by JimDabell

7 months ago

> So I think we will need a different detection mechanism. Maybe something from the real world, some type of ID, or even micropayments. I'm not sure, but it's clear that bot detection is at the opposite, and currently losing, side of the AI race.

I think the most likely long-term solution is something like DIDs.

https://en.wikipedia.org/wiki/Decentralized_identifier

A small number of trusted authorities (e.g. governments) issue IDs. Users can identify themselves to third-parties without disclosing their real-world identity to the third-party and without disclosing their interaction with the third-party to the issuing body.

The key part of this is that the identity is persistent. A website might not know who you are, but they know when it’s you returning. So if you get banned, you can’t just register a new account to evade the ban. You’d need to do the equivalent of getting a new passport from your government.
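The pseudonymous-but-persistent property can be illustrated with a toy sketch (this is not the actual DID spec, just the idea): derive a stable per-site pseudonym from a user secret, so the same user always looks the same to a given site, while two sites cannot correlate their IDs. The HMAC derivation and the `user_secret` credential here are assumptions for illustration.

```python
import hmac
import hashlib

def site_identifier(user_secret: bytes, site_domain: str) -> str:
    """Derive a stable, per-site pseudonym from a user's secret.

    The same user always yields the same ID for a given site (so a ban
    sticks), but IDs for different sites are unlinkable without the
    secret, so the site never learns a real-world identity.
    """
    return hmac.new(user_secret, site_domain.encode(), hashlib.sha256).hexdigest()

# Hypothetical credential issued by a trusted authority.
secret = b"issued-by-a-trusted-authority"

a = site_identifier(secret, "example.com")
b = site_identifier(secret, "example.com")
c = site_identifier(secret, "other.org")

assert a == b  # persistent: the site recognises the returning user
assert a != c  # pairwise: two sites cannot correlate their IDs
```

Real DID schemes layer cryptographic proofs and issuer signatures on top of this idea, but the core trade-off is the same: the ID is stable per relying party, so evading a ban means getting a new credential from the issuer.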

But this means that a SaaS banning you from your account for spurious reasons can now be a serious problem.

  • You could roll a new ID to replace the previous one. Each user would still have only one at a time. If that isn't acceptable, a service could ask to have the feature disabled for clear mission-critical reasons and/or for a fee.

  • That’s the point. Bans should be effective.

    • I get it. And also, I know that Apple and Google would abuse that, and destroy lives and businesses as casually as I eat my breakfast. Then thousands of disposable companies would pop up with valid IDs, abuse some system (like the terrible DMCA), and make it worse.

      If you think people self-censoring on social media is a problem now (the "unlive" newspeak is always such a dystopian hint to me), you haven't seen anything yet.

https://www.wired.com/story/worldcoin-sam-altman-orb/

It also allows automated software to act on behalf of a person, which is excellent for assistive technologies and something most current bot detection leaves behind.

  • I think this will be a positive effect of the rise of AI agents. We're going to have a much different distribution of automated vs. human traffic, and authentication methods will have to be more robust than they are now.

On the one hand, yes, this might work, but I'm concerned that it will inevitably require loss of anonymity and be abused by companies for user tracking. I suppose any type of user identification or fingerprinting is at the expense of user privacy, but I hope we can come up with solutions that don't have these drawbacks.

  • The benefit of drastically reducing fraud could create an ecosystem where the trade-off is worth it for users. For example, generous free plans or trials could exist without companies needing to invest so much in anti-fraud measures.

  • > I'm concerned that it will inevitably require loss of anonymity and be abused by companies for user tracking.

    Are you sure you read my comment fully?

    • I did. It doesn't matter that the website might not be able to directly associate a real-world identity with a digital one. It takes only a small number of signals to uniquely fingerprint a user, so it's just a matter of associating the fingerprint with the ID, whether that ID is real-world or digital. It can still be used for tracking. A static ID that can only be issued by governments or approved agencies would just make it easier for companies to track users.

If this gets implemented, the next thing the govt will do is require all websites to store DIDs of visitors for at least 10 years and not accept visitors without them.

  • This makes no sense at all. If a government wanted to pass a law to force logins and track people, they could do that today without using an identifier that is worthless for that purpose.

I hadn't heard of DIDs before. How do they actually work? Are they government-issued? I'm not sure I would trust that, though.