Comment by AnthonyMouse
3 days ago
> The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting
I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party in order to count them, then two services can tell that two requests came from the same party (by both pretending to be the same service) and can therefore correlate them.
The way it would work with blind signatures is that the server knows which device comes to it to request a blinded signature, and can rate limit how often that device asks.
But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature). This token can then be used only once, because it's blacklisted after use (and it expires before the next day starts, for example).
The desired property of blind signatures is that, given a token, it's information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out.
Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically and storing them ahead of time. Or possibly some other approach I haven't thought of. There is also a networking aspect to this: you will need a decentralized relay network that masks the origin of requests.
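For intuition, the issue-then-unblind flow can be sketched with textbook RSA blinding. This is a toy sketch with tiny made-up parameters, not a real protocol; deployed systems use standardized blind RSA (RFC 9474) or similar with full-size keys:

```python
import hashlib
import secrets
from math import gcd

# Toy RSA parameters -- tiny primes for illustration only. A real deployment
# would use >=2048-bit keys and a standardized blind-signature scheme.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Client: hash the token request and blind it with a random factor r.
m = h(b"token-request|2024-06-01")
r = secrets.randbelow(n - 2) + 2
while gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n        # the server only ever sees this

# Server: rate limits the known device, then signs the blinded value.
signed_blinded = pow(blinded, d, n)

# Client: divide out r to get a signature the server never saw in this form.
sig = (signed_blinded * pow(r, -1, n)) % n

# Any verifier can check the token, but can't match it to a blinded request.
assert pow(sig, e, n) == m
```

The server's view (`blinded`) and the verifier's view (`sig`) are unlinkable because `r` never leaves the client.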
> But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature).
The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you.
The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp but they're both Meta so you present the same unblinded signature to both services and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third party service and they require you to submit your unblinded signature to Meta which allows them to correlate you everywhere.
> you present the same unblinded signature to both services
You would never do this as it defeats the entire purpose of using blind signatures to begin with.
3 replies →
Just to give an example to prime your intuition: define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter). Make your scheme provably reveal the usage token when you authenticate. Now you can use the service 16 times a day on a particular domain and no more, simply by blocking token reuse. And yet the service has no ability to link different tokens to each other or to a specific person, because they don't have anyone else's private keys.
You can make variations on this for a wide spectrum of rate limiting behaviors.
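A minimal sketch of the mechanics, assuming SHA-256 as H and `|`-separated byte strings (the ZK proof that the token was honestly derived from a valid key is omitted; this only shows the rate-limiting side):

```python
import hashlib

def usage_token(private_key: bytes, domain: str, date: str, counter: int) -> str:
    """Derive the per-use token H(private_key|domain|date|counter)."""
    assert 0 <= counter < 16               # 4-bit counter: at most 16 uses/day
    material = b"|".join([private_key, domain.encode(),
                          date.encode(), bytes([counter])])
    return hashlib.sha256(material).hexdigest()

# Service side: rate limiting is just blocking token reuse.
seen = set()

def accept(token: str) -> bool:
    if token in seen:
        return False
    seen.add(token)
    return True

key = b"alice-private-key"
t0 = usage_token(key, "example.com", "2024-06-01", 0)
assert accept(t0)        # first use goes through
assert not accept(t0)    # replay is blocked
assert accept(usage_token(key, "example.com", "2024-06-01", 1))  # fresh counter
```

Each token is an opaque hash, so the service can count uses without linking them to a key or to each other.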
But also I agree with xinayder's comment-- the anti-competitive, anti-privacy, invasive surveillance is unacceptable. There is a real risk with ZKPs that we just make the poison a little less bitter, with the end result being more harm to humanity.
I think ZKP systems are intellectually interesting, and their lack of use helps make it clear that surveillance is really the point of these schemes, not security, because most of the security (or more of it) could be achieved without most of the surveillance.
But allowing the apple/google duopoly to control who can read online is wrong even if they did it in a way that better preserved privacy.
And because I can't believe no one else in the thread has linked to it: https://www.gnu.org/philosophy/right-to-read.html
This is useless. They want to be able to permanently ban an account that misbehaves - not limit it to misbehaving 16 times a day.
I have sympathy for the desire but that isn't something you actually get through google's surveillance-ware.
You can change the information you put into the hash in my example to get one use per site per day, or one per year, or even one per site ever, all without giving cross-site linkability (which does you no good) or giving google visibility into everyone all the time.
But that still doesn't get you to your desired unevadable bans, but with suitable parameters it can get as close as google's spyware approach while being much more private.
I think a time-oriented rate limit makes the most sense considering the limits in practice (an attacker just gets access to another discarded phone, or tricks someone into authenticating for them via theirs)-- which basically means the best you can do against dedicated attackers is rate limit them. So why subject honest users, who may have good privacy reasons to use multiple accounts over time, to worse effective limits than attackers?
But you don't have to agree with that to accept that schemes much more private than google's are possible.
> define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter)
But how are you preventing multiple services from using the same value for service_domain_name because they're cooperating to correlate your use?
Because-- in this hypothetical-- your user agent restricts the usage to the name displayed on the screen, and also because your agent won't send the same value twice (it'll increment the counter or tell you that it's run out of tokens).
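In this hypothetical, the agent-side bookkeeping might look like the following sketch (names and the 16-token limit are illustrative, matching the 4-bit counter above):

```python
import hashlib
from collections import defaultdict

class TokenAgent:
    """Hypothetical user-agent side: binds tokens to the domain shown on
    screen and never emits the same (domain, date, counter) value twice."""

    def __init__(self, private_key: bytes):
        self.key = private_key
        self.counters = defaultdict(int)   # (domain, date) -> next unused counter

    def next_token(self, displayed_domain: str, date: str) -> str:
        c = self.counters[(displayed_domain, date)]
        if c >= 16:                        # 4-bit counter: 16 tokens/site/day
            raise RuntimeError("out of tokens for this site today")
        self.counters[(displayed_domain, date)] = c + 1
        material = b"|".join([self.key, displayed_domain.encode(),
                              date.encode(), bytes([c])])
        return hashlib.sha256(material).hexdigest()

agent = TokenAgent(b"alice-secret")
first = agent.next_token("example.com", "2024-06-01")
second = agent.next_token("example.com", "2024-06-01")
assert first != second   # counter advanced, so colluding sites can't replay
```

Two sites claiming the same displayed name just end up sharing one 16-token budget rather than gaining linkability.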
2 replies →
I'm as biased against cryptocurrency as anyone, but couldn't we have the requestor do a bit of mining work to mint that initial id? I mean, if the service is actually making a bit of money from each request, the need for rate limiting just vanishes, right?
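The mining idea is essentially hashcash-style proof of work: minting an id costs many hashes, verifying it costs one. A rough sketch (the difficulty and nonce encoding are arbitrary choices here):

```python
import hashlib
import itertools

DIFFICULTY_BITS = 16   # arbitrary: minting takes ~2^16 hashes on average

def _digest_value(payload: bytes, nonce: int) -> int:
    return int.from_bytes(
        hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest(), "big")

def mint_id(payload: bytes) -> int:
    """Search for a nonce whose hash clears the difficulty target."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for nonce in itertools.count():
        if _digest_value(payload, nonce) < target:
            return nonce

def verify(payload: bytes, nonce: int) -> bool:
    """Checking costs a single hash, however much work minting took."""
    return _digest_value(payload, nonce) < (1 << (256 - DIFFICULTY_BITS))

nonce = mint_id(b"request-id-1234")
assert verify(b"request-id-1234", nonce)
```

Raising `DIFFICULTY_BITS` by one doubles the expected minting cost without changing verification cost.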
If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living in third-world countries. They will even outbid some Americans.
Yes, those AI startups can also buy cheap Android phones at scale, but it's a bit harder because they'll pay for stuff that their bots have no use for (a screen, a battery, a 5G radio, software, branding, distribution, customer support etc).
As I see it, living requires money. If we have people on this planet that are too poor to digitally prove that they're alive, then we need to figure out a way to distribute the Earth's wealth more equally in general, rather than to require hardware attestation, which seems to be worse on essentially every metric, including inequality.
1 reply →
> If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living third world countries. They will even outbid some Americans.
The difference is that if you're human you can create an account and then carry on using it for decades, whereas if you're an aggressive scraper bot or spammer then you get banned and have to buy new accounts over and over.
1 reply →
At least they would give money to something useful.
3 replies →
> I still don't see how you can keep something anonymous and still rate limit it.
Constructions like this have existed for many years, e.g. Semaphore RLN (rate-limiting nullifier). This particular construction was considered infeasible 7 years ago, but zkSNARK tech has made huge progress since then and it is far cheaper now.
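The core trick in RLN can be illustrated without the zkSNARK machinery: each message in an epoch publishes one point on a line whose intercept is the sender's secret key, so a second message in the same epoch lets anyone interpolate the line and recover (and ban) the key. A toy sketch with an arbitrary prime field (real RLN derives the slope from the secret and the epoch, and proves correct evaluation inside a SNARK):

```python
# Toy Shamir-style sketch of the RLN rate-limiting mechanism.
P = 2**61 - 1   # arbitrary prime field for illustration

def share(secret: int, epoch_slope: int, x: int) -> tuple[int, int]:
    """Evaluate y = secret + slope*x: the point published with a message."""
    return x, (secret + epoch_slope * x) % P

def recover(p1: tuple[int, int], p2: tuple[int, int]) -> int:
    """Two points from the same epoch: interpolate back to the intercept."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) * pow(x2 - x1, -1, P) % P
    return (y1 - slope * x1) % P

secret, epoch_slope = 123456789, 987654321   # slope is per-epoch in real RLN
a = share(secret, epoch_slope, x=5)          # first message: reveals nothing
b = share(secret, epoch_slope, x=9)          # second message in same epoch...
assert recover(a, b) == secret               # ...leaks the key, enabling a ban
```

One point per epoch is information-theoretically useless on its own; exceeding the rate is what makes the secret recoverable, so the rate limit enforces itself.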