Comment by thecatapps

3 days ago

With all of the discourse around hardware attestation, digital ID, and age verification in recent weeks/months, is there actually any good solution to the problems these existing tools (Privacy Pass, WEI, Fraud Defense, uploading IDs) claim to solve? Are there open and privacy-preserving standards that can solve the problem of bots and minors? If not, what would be required to establish one, and is it realistic?

Businesses will do what businesses will do, but it seems to me that having something to point to and say "do this instead" is more effective than "this sucks and isn't even about security, don't do this at all", even though the latter is true.
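For reference, the Privacy Pass protocol mentioned above rests on blind signatures: an issuer signs a token without ever seeing it, so later redemptions can't be linked back to issuance. Here's a toy sketch of the blind-RSA variant (real deployments use RFC 9474 blind RSA or a VOPRF; the parameters below are tiny and deliberately insecure, for illustration only):

```python
# Toy sketch of a Privacy Pass-style blind-signature token.
# Tiny, insecure parameters -- illustration only.
import hashlib
import secrets

# Issuer's toy RSA key (small primes for readability).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def hash_to_int(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(m: int):
    # Client picks a random r to hide m from the issuer.
    while True:
        r = secrets.randbelow(n - 2) + 2
        try:
            r_inv = pow(r, -1, n)
        except ValueError:
            continue  # r not invertible mod n; retry
        return (m * pow(r, e, n)) % n, r_inv

def issue(blinded: int) -> int:
    # Issuer signs after some humanity check (e.g. a CAPTCHA),
    # without ever seeing the token it is signing.
    return pow(blinded, d, n)

def unblind(blind_sig: int, r_inv: int) -> int:
    return (blind_sig * r_inv) % n

def verify(m: int, sig: int) -> bool:
    # Any relying site can check the signature, but the issuer
    # cannot link this redemption back to the issuance event.
    return pow(sig, e, n) == m

token = secrets.token_bytes(16)
m = hash_to_int(token)
blinded, r_inv = blind(m)
sig = unblind(issue(blinded), r_inv)
assert verify(m, sig)
```

The point is that "prove you're human once, anonymously redeem later" is cryptographically possible; whether vendors actually deploy it that way is the political question the thread is arguing about.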

What even is the problem? I keep my kids' computers in the living room where it's easy to see what they are doing. Their LAN shuts down at night when I'm asleep. They don't get full control of their own cell phone until they are around 16 years old. Bots on social media discourage me from using it, which is a Good Thing if you ask me.

  • The problem is that companies have a legitimate reason to want to block AI agents and verify the users are actually real. And it's incredibly difficult to do that when the old methods of clicking on squares or reading blurry words don't work anymore.

    Solving proof of humanity is very difficult without tying it to some kind of ID that is hard to replicate or automate.

    • > The problem is that companies have a legitimate reason to want to block AI agents and verify the users are actually real.

      Sucks that they have a hard problem like that. Taking away everyone's freedom to exercise ownership of their general purpose computing devices and destroying online anonymity shouldn't be the answer (or, at least, we shouldn't stand for it). Maybe they can spend some of their billions in revenue on it.

  • > Bots on social media

    ... are not problems, no - but bots in general are

There is a good solution to these problems: exhaustive punishments, and forcibly shutting down operations for repeat offenses.

China has all the tech giants jumping through whatever hoops it wants: they are banned by default, and only the ones that meet its strict policies and ad hoc decisions are allowed to operate.

Now that the US has decided the EU is a rival, the EU should do the same.

Thank you for offering this take -- it is the only forward-looking one.

The anonymous internet is going away -- it is too supportive of crime and various kinds of gray area misconduct, and governments and large corporations were eventually going to do something about that.

Such a degree of anonymity is desirable, but it is not a requirement for a free society. What were things like before the internet? You couldn't anonymously browse billions of pages of information in 1960.

> Are there open and privacy-preserving standards that can solve the problem of bots and minors? If not, what would be required to establish one, and is it realistic?

Ideally there shouldn't be standards for this. What we have already is enough.

Companies claiming they are closing down their services/devices to protect the users is total BS. Facebook has admitted they get 10% of their ad revenue from scams, and that's the reason they won't go after scammers on their platforms.

The same can be said for Google. They could come up with numerous ways to block bots or make CAPTCHAs harder for actual bots (while also not flagging every non-Chrome user as a potential bot, like they do nowadays), but they pretend this is an unsolvable problem that requires a nuclear solution: it used to be Web DRM, but now it's called Fraud Defense.

  • I disagree. Bots have always been an issue, but now every form of CAPTCHA that can be solved by a human can also be solved by a multi-modal language model. Bots are slowly taking over in forums where they previously would have been immediately spotted and banned.

    If the only argument you can make every time someone proposes an onerous, privacy-destroying solution to this problem is deny the problem exists, you're going to lose.

    GP is correct, we need an alternative we can point to.

    • > Bots are slowly taking over in forums where they previously would have been immediately spotted and banned.

      I failed to mention this, but part of the reason is that even after getting past a CAPTCHA and creating an account, spam bots in online communities have always had to pass a sort of informal, ongoing Turing test to avoid being outed as bots by human users and banned. In the past, that would happen almost instantly, as soon as the bot posted anything. Now they can often go undetected even by human mods for a long time, maybe indefinitely.

The people pushing for age verification have already said that they want to know who's behind every account on every website on the entire Internet. They won't accept any open or privacy-preserving standard.