Comment by coppsilgold
3 days ago
Requiring authorized silicon (and software) isn't even the biggest problem here.
They do not use zero-knowledge proof systems or blind signatures. So every time you use your device to attest, you leave behind something (the attestation packet) that can be used to link the action to your device. They put on a show about how much they care about your privacy by introducing indirection into the process (a static device 'ID' is used to acquire an ephemeral 'ID' from an intermediate server), but it's just a show, because you don't know what those intermediary servers are doing: you should assume they log everything.
And this is just the remote attestation vector; the DRM 'ID' vector is even worse (no meaningful indirection, every license server has access to your burned-in-silicon static identity). And the Google account vector is what it is.
Using blind signatures for remote attestation has actually been proposed, but no one notable is currently using it: <https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation>
There are several possible reasons for this. The obvious one is that they want to be able to violate your privacy at will, or are mandated to have the capability. The other is that because it's not possible to link an attestation to a particular device, the only feasible mitigation against abuse is rate limiting, which may not be good enough for them - an adversary could set up a farm where every device earns $/hour by providing remote attestations to 'malicious' actors.
> The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting
I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party in order to count them, then two services can tell that two requests came from the same party (by both pretending to be the same service) and therefore correlate them.
The way it would work with blind signatures is that the server knows which device comes to it to request a blind signature, and can rate limit how often that device asks.
But once you get the response you can unblind it and obtain the token (which is just the unblinded signature). This token can then be used only once, because it's blacklisted after use (and it expires before the next day starts, for example).
The desired property of blind signatures is that given a token it's information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out.
Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically in advance and storing them - or probably something else I haven't thought of. There is also a networking aspect to this: you'd need a decentralized relay network that masks the origin of requests.
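To make that issue/unblind/redeem flow concrete, here is a toy RSA blinding round trip (textbook-sized numbers, no padding, all parameters made up for illustration - a real deployment would use something like properly padded RSA blind signatures or Privacy Pass style tokens):

    # Toy RSA blind signature: the issuer signs a token without ever seeing it.
    import hashlib, math, secrets

    # Illustrative textbook issuer key (real keys are 2048+ bits).
    n, e, d = 3233, 17, 2753

    def h(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

    # Client: pick a secret token and blind its hash before sending it out.
    token = secrets.token_bytes(16)
    m = h(token)
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n        # this is all the issuer ever sees

    # Issuer: authenticates and rate limits the device, then signs blindly.
    blinded_sig = pow(blinded, d, n)

    # Client: unblind to obtain an ordinary signature on the token.
    sig = (blinded_sig * pow(r, -1, n)) % n

    # Relying service: verify, then blacklist (token, sig) so it is single-use.
    assert pow(sig, e, n) == h(token)

The issuer can link the blinded request to the device (and rate limit it), but it cannot link the unblinded (token, sig) pair redeemed later back to that request.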
> But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature).
The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you.
The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp, but they're both Meta, so you present the same unblinded signature to both services and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third-party service and it requires you to submit your unblinded signature to Meta, which allows them to correlate you everywhere.
4 replies →
Just to give an example to prime your intuition: define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter). Make your scheme provably reveal the usage token when you authenticate. Now you can use the service 16 times a day on a particular domain and no more, simply by blocking token reuse. And yet the service has no ability to link different tokens to each other or to a specific person, because they don't have anyone else's private keys.
You can make variations on this for a wide spectrum of rate limiting behaviors.
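A minimal sketch of just the token derivation and reuse blocking described above, assuming a registered per-client secret key (the "provably reveal" part would be a ZKP and is omitted; all names here are illustrative):

    import hashlib
    from datetime import date

    def usage_token(private_key: bytes, domain: str, counter: int) -> str:
        # H(private_key | service_domain_name | date | 4-bit counter)
        assert 0 <= counter < 16              # 4-bit counter => at most 16 uses/day
        material = b"|".join([private_key, domain.encode(),
                              date.today().isoformat().encode(),
                              bytes([counter])])
        return hashlib.sha256(material).hexdigest()

    # Service side: rate limiting is just "reject any token we've already seen today".
    seen_today: set[str] = set()

    def redeem(token: str) -> bool:
        if token in seen_today:
            return False
        seen_today.add(token)
        return True

    key = b"client's registered secret key (never sent to the service)"
    assert all(redeem(usage_token(key, "example.com", i)) for i in range(16))
    assert not redeem(usage_token(key, "example.com", 3))   # reused counter slot is rejected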
But also I agree with xinayder's comment - the anticompetitive, anti-privacy, invasive surveillance is unacceptable. There's a real risk with ZKPs that we just make the poison a little less bitter, with the end result being more harm to humanity.
I think ZKP systems are intellectually interesting, and their lack of use helps make it clearer that the surveillance is really the point of these schemes, not security, because most of the security (or more of it) could be achieved without most of the surveillance.
But allowing the Apple/Google duopoly to control who can read online is wrong even if they did it in a way that better preserved privacy.
And because I can't believe no one else in the thread has linked to it: https://www.gnu.org/philosophy/right-to-read.html
This is useless. They want to be able to permanently ban an account that misbehaves - not limit it to misbehaving 16 times a day.
1 reply →
> define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter)
But how are you preventing multiple services from using the same value for service_domain_name because they're cooperating to correlate your use?
3 replies →
I'm as biased against cryptocurrency as anyone, but couldn't we have the requestor do a bit of mining work to mint that initial ID? I mean, if the service is actually making a bit of money from each request, the need for rate limiting just vanishes, right?
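A minimal sketch of that idea, under the assumption that "minting" just means finding a hash below a difficulty target (the difficulty, hash, and request format are arbitrary choices here, not any real attestation API):

    import hashlib, itertools

    DIFFICULTY_BITS = 20   # arbitrary; tune so minting costs the requestor real CPU time
    TARGET = 1 << (256 - DIFFICULTY_BITS)

    def mint(request: bytes) -> int:
        # Find a nonce such that sha256(request || nonce) falls below the target.
        for nonce in itertools.count():
            digest = hashlib.sha256(request + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < TARGET:
                return nonce

    def verify(request: bytes, nonce: int) -> bool:
        digest = hashlib.sha256(request + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < TARGET

    req = b"mint-initial-id:2026-01-01:client-blob"
    nonce = mint(req)            # expensive for the requestor (~2^20 hashes on average)
    assert verify(req, nonce)    # one hash for the service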
If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living in third world countries. They will even outbid some Americans.
Yes, those AI startups can also buy cheap Android phones at scale, but it's a bit harder because they'll pay for stuff that their bots have no use for (a screen, a battery, a 5G radio, software, branding, distribution, customer support etc).
8 replies →
> I still don't see how you can keep something anonymous and still rate limit it.
Constructions like this have existed for many years, e.g. Semaphore RLN (rate-limiting nullifier). This particular construction was considered unfeasible 7 years ago, but since then zkSNARK tech has made huge progress and it is way cheaper now.
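For intuition, here's the core slashing trick behind an RLN-style construction with the ZK machinery stripped away (toy field arithmetic, a limit of one message per epoch; the real thing wraps this in a zkSNARK plus a membership proof):

    import hashlib

    P = 2**255 - 19   # a large prime field, chosen only for illustration

    def h(*parts) -> int:
        data = "|".join(map(str, parts)).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

    def signal(secret: int, epoch: str, message: str):
        a1 = h(secret, epoch)          # per-epoch slope, known only to the sender
        x = h(message)                 # evaluation point derived from the message
        y = (secret + a1 * x) % P      # a share of the line y = secret + a1*x
        nullifier = h(a1)              # lets anyone group signals by (sender, epoch)
        return x, y, nullifier

    def recover_secret(share1, share2):
        # Two distinct shares of the same line reveal its intercept: the secret.
        (x1, y1), (x2, y2) = share1, share2
        a1 = (y1 - y2) * pow(x1 - x2, -1, P) % P
        return (y1 - a1 * x1) % P

    secret = 123456789
    x1, y1, n1 = signal(secret, "epoch-42", "first message")
    x2, y2, n2 = signal(secret, "epoch-42", "second message")   # over the limit
    assert n1 == n2                                              # same nullifier flags the spam
    assert recover_secret((x1, y1), (x2, y2)) == secret          # spammer's key is exposed/slashable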
Can we stop normalizing being surveilled online and on our devices?
By saying something like "the problem is not hardware attestation, but that they don't use ZKP", you are normalizing the new behavior. You shouldn't. It doesn't matter if they use ZKP or the latest, most secure technology for hardware attestation. The issue is hardware attestation. It's the same with age ID: the issue is not that Age ID is prone to data leaks, the problem itself is called Age ID.
Hell yes. I was going to post the same comment. I don't give a flying fuck how it's implemented. Remote attestation is inherently evil.
I remember the WEI apologists trying to do the same thing to derail the argument. The problem is the goal, not the details. Just say no: DO NOT WANT!
The biggest problem is the banking system. "Don't want it - no bank for you." That's the problem.
26 replies →
Remote attestation is a technology, not a policy or a political effort, so it can't be inherently evil. You can disagree with all its known or proposed uses, but then I think it makes more sense to name these.
23 replies →
How should a government act to prohibit people from misrepresenting their characteristics online in order to access services for which that government has formally written characteristic-based regulations into law?
If your answer is “they shouldn’t ever do that”, then you’re promoting an uncompromising position that governments are disinclined to adopt, given that they are the primary issuers and verifiers of identity on behalf of their citizens.
If your answer is “they should do that differently”, then you have a discussion about (for example) ZKPs or biosigs etc., such as the thread you’re replying to.
Which of these two paths are you here to discuss? I want to be sure I’ve correctly understood you to be arguing for the former in a thread about the latter.
You're not necessarily being surveilled just because you're forced to authenticate yourself. It often is the case in practice, but it's not inherent, and mixing the two up makes the discussion too imprecise for a technical forum.
Hardware attestation often also has problems of centralization, but that's something else as well.
By just labeling it an abstract bad thing without seeing nuance, I'm afraid you won't convince those in power to pass or block these laws, or convince your fellow voters which efforts to support.
> It often is the case practically, but it's not inherent
Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.
Hardware attestation is a surveillance mechanism. If China was enforcing the same rule, you would immediately identify it as a state-driven deanonymization effort. But when the US does it, you backpedal and suggest that it could be implemented safely in a hypothetical alternate reality. Do you want to live in a dystopia?
1 reply →
I think labeling this an abstract problem, when all the existing implementations have concrete but different problems, is a little bit of a motte-and-bailey fallacy.
The surveillance of the future will be powered by the things we produce today. If the accepted algorithms leave cookies, those cookies will be tracked and monetized. The bad part is forced verification to do things on the internet. Making that start at the hardware is a lock-in that's not okay. Businesses will always own the services, and making standards that trade our practical liberty for the sake of security is a very compromised position in my opinion.
And it does start with age verification, followed by ID checks, etc. It's compromising precisely because no lines are drawn and no rights to privacy are codified in law. Without guardrails, the worse path will likely be taken for maximum profit.
> You're not necessarily being surveiled just because you're forced to authenticate yourself.
Oh hell you are! Google's profit comes from ADS! It's in their interest to surveil and track and deanonymize TO SELL ADS.
4 replies →
Those in power who need convincing are the same ones pushing for mass surveillance online.
There is a real problem in that it's becoming increasingly hard to determine whether the internet packets coming to your service are sent at the behest of a human in the course of normal activities or by an automated program.
If all the internet served was static content, that wouldn't be much of a problem. But we live in a world where packets coming to your service result in significant state changes to your database (such as user-generated content).
I suspect that we are currently in the valley of do-something-about-it on the graph, which is why you see all this angst from the big players. Would Google really care if automated programs were so good that they approximated real humans to such an extent that absolutely no one could tell? I suspect they would not only be happy with such a state of affairs, they would join in.
That's not a problem at all. It's an artificial distraction, created to manufacture consent by those pushing for this shit.
> Requiring authorized silicon (and software) isn't even the biggest problem here.
It is indeed the biggest issue. It prevents me from owning and using the hardware I pay for, own, or make myself. It's switching the personal computer as we know it from being open to being proprietary and owned by 2 large US corporations.
I don't agree that it's not a problem.
Did you just read “not even the biggest problem” as “not a problem”?
I mean it's THE biggest one.
Would like to read a writeup on this; I was certain it was going to be something like this from the app's announcement.
Also I recall a discussion on Graphene's forums that DRM ID is not only retained there, but stays the same across profiles.
I simplified the process in my description. The DRM ID Android has is not what I was referring to.
I was referring to the static private key that is stored in the silicon. At any time an application can initiate a license request using the DRM APIs, which will elicit an unchangeable HWID from your device. The only protection is that it will be encrypted to an authorized license server's key, so collusion may be required (intel agencies almost certainly sourced 'authorized' private keys for themselves). Google or Apple also have the option to authorize keys for themselves. In 'theory' all such keys should be stored in "trusted execution environments" on license servers and not divulge client identities, for whatever that's worth: <https://tee.fail>.
Citation?
1 reply →
Can you revoke the certificate for a specific device under these privacy schemes?
Like, imagine that someone managed to extract the key from a specific device and distributed that key in a software implementation to fake attestation. Now Google needs to revoke that particular key to disallow its usage. This is an obvious requirement.
Yes - with blind signatures you still have a central authority which voluntarily 'launders' tokens for you. When you present your certificate and ask it to give you a blind signature, it can reject the certificate.
However, if someone extracts a key and keeps it private, and instead gives out unblinded tokens, there is nothing you can do other than rate limit - realistically, an adversary is going to trial different rates anyway to figure out which ones don't make them an outlier.
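A tiny sketch of that issuer-side gate, assuming the issuer sees a device certificate at token-issuance time (all names are made up, and the blind-signing primitive itself is stubbed out):

    # Revocation and rate limiting happen at the token issuer, before blind signing.
    REVOKED_DEVICE_KEYS = {"leaked-device-key-1", "leaked-device-key-2"}
    issued_today: dict[str, int] = {}     # device key -> tokens issued so far today
    DAILY_LIMIT = 16

    def blind_sign(blinded_token: int) -> int:
        return blinded_token              # placeholder for the real blind-signing step

    def issue_token(device_key: str, blinded_token: int) -> int | None:
        if device_key in REVOKED_DEVICE_KEYS:
            return None                   # known-compromised device: refuse outright
        if issued_today.get(device_key, 0) >= DAILY_LIMIT:
            return None                   # rate limit what any one device can obtain
        issued_today[device_key] = issued_today.get(device_key, 0) + 1
        return blind_sign(blinded_token)

For a key that has been extracted but not publicly revoked, the per-device rate limit is the only part of this gate that still bites - which is the point above.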
Especially if the device in question is linked to an enemy of the state and the people.
> Requiring authorized silicon (and software) isn't even the biggest problem here.
I agree, except I worry it's a bigger concern than we realize.
I still remember what CableCard (and the hoops needed for HW manufacturers to get certified) did to the DIY DVR Market...
Ultimately, the point of hardware attestation isn't to ensure that your device is trusted, but that the action you're trying to perform was done by a human, not a bot doing millions of them per second. It's just another CAPTCHA mechanism in disguise, required because bots have gotten so good at solving the existing ones.
With a secure device, the only way to get an attestation for an account signup is to do the signup on that device, with real fingers clicking real buttons on a real screen. There's no way to short-circuit the process by automatically sending a JSON request and bypassing the actual signup flow from a Python script, like you can do with an insecure endpoint.
With blind signatures, a single compromised device destroys the value of the entire scheme, as it can be used to issue an infinite number of attestations with 0 human oversight.
What we need is a blind signature construction where the verifier can revoke a signature, each signature carries proof that none of the revoked signatures comes from the same signer, and where it is impossible for one signer to issue more than n distinct signatures during one time window. Not sure if this would be possible with ZKPs; my cryptography knowledge doesn't extend that far.
> Ultimately, the point of hardware attestation isn't to ensure that your device is trusted, but that the action you're trying to perform was done by a human, not a bot doing millions of them per second. It's just another CAPTCHA mechanism in disguise, required because bots have gotten so good at solving the existing ones.
...no? Maybe this is true of end-user device attestation. But there are other use-cases for attestation.
Server device attestation is an entirely different thing. It's used in e.g. IaaS "Confidential VM" offerings, where the audience for the attestation information is the customer, rather than the server host. It's a very pro-privacy / pro-data-sovereignty feature.
And while embedded device attestation is sometimes about preventing customers from tampering with IoT stuff you "sold" them, more often it's about being able to trust and confidently assert that e.g. the climate sensors you've deployed all over a forest as part of a research project haven't been fucked with to report false data by someone with an agenda. (Or to "apply denial" to your unmanned military satellite downlink station the moment you detect that there's some unknown person out there futzing with it.)
Are these the kinds of issues Privacy Pass intends to fix? If so, what carrot and/or stick will get it adopted?