
Comment by xinayder

3 days ago

Can we stop normalizing being surveilled online and on our devices?

Saying something like "the problem is not hardware attestation, but that they don't use ZKP" is exactly this normalization.

You shouldn't do it. It doesn't matter whether they use ZKP or the latest, most secure technology for hardware attestation; the issue is hardware attestation itself. It's the same with Age ID: the problem is not that Age ID is prone to data leaks, the problem is Age ID itself.

Hell yes. I was going to post the same comment. I don't give a flying fuck how it's implemented. Remote attestation is inherently evil.

I remember the WEI apologists trying to do the same thing to derail the argument. The problem is the goal, not the details. Just say no: DO NOT WANT!

  • The biggest problem is the banking system. "Don't want it? Then no bank for you." That's the problem.

    • Let them know. Write a letter to the CEO. And vote with your wallet and switch banks if you can. There's always a bank willing to offer you a non-app 2FA scheme.

  • Remote attestation is a technology, not a policy or a political effort, so it can't be inherently evil. You can disagree with all its known or proposed uses, but then I think it makes more sense to name these.

    • DRM is a technology and is inherently evil. Web attestation is DRM for the web, and is inherently evil. Age ID is a technology and is inherently evil.

      We have had the World Wide Web for over 30 years, and for more than three decades this was never a problem. Suddenly we "need" to create new technologies that look like security features but are essentially just used for evil, and are therefore inherently bad.

      It's not as if these technologies were created for the greater good and then misappropriated by bad actors. They were proposed by bad actors in the first place; they cannot be inherently good.

    • Remote attestation is a policy, not a technology.

      The policy is "I will not let you access this system unless your system software implements this technological protection."

      A camera is technology. A security camera is policy, because it's a camera hooked up to rules about how to watch, what to record, and how to respond to what is recorded; and it becomes a political effort when connected with laws about face masks, prohibitions on spray-painting the cameras, and allowances for privacy intrusions.
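
      To make the policy framing concrete, here is a toy sketch of the gate such a service applies (all names assumed; an HMAC stands in for a TPM/vendor quote, which this is not): the service refuses to respond unless the client presents a vendor-signed statement that it runs approved software.

        # Toy attestation gate (illustrative only, not a real TPM flow).
        import hashlib, hmac, secrets

        VENDOR_KEY = b"vendor-root-secret"  # held only by vendor-blessed hardware

        def attest(measurement: bytes, nonce: bytes) -> bytes:
            # Only the vendor (or vendor-blessed hardware) can produce this tag.
            return hmac.new(VENDOR_KEY, measurement + nonce, hashlib.sha256).digest()

        def admit(measurement: bytes, nonce: bytes, tag: bytes, allowed: set) -> bool:
            # The policy: no service unless you run approved software.
            genuine = hmac.compare_digest(tag, attest(measurement, nonce))
            return genuine and measurement in allowed

        nonce = secrets.token_bytes(16)
        blessed = hashlib.sha256(b"approved-os-build").digest()
        homebrew = hashlib.sha256(b"my-own-os").digest()
        tag = attest(blessed, nonce)
        print(admit(blessed, nonce, tag, {blessed}))    # True: admitted
        print(admit(homebrew, nonce, tag, {blessed}))   # False: locked out

      The crypto is the camera; the allowed set, and whoever controls VENDOR_KEY, is the policy.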

    • Different technologies may selectively amplify existing power. If the actions a technology enables are disproportionately evil, then it may, at the very least, be considered very useful for evil.

      Suppose someone invents a mind-reader that lets the user read the thoughts of anybody else in range. But the mind-reader requires great up-front costs to produce and also allows people with stronger readers to remotely destroy weaker readers, where strength is basically a function of cost.

      In a vacuum, the mind-reader is "just a technology". But it aids autocratic surveillance much more than it aids citizens who want to surveil back. It's "neutral", but its impact is decidedly not.

      TPMs and remote attestation enable entities with power to enforce their existing power much more effectively. In contrast, a general-purpose computer does the opposite because anybody can run whatever code they want, they can adversarially interoperate with anybody they feel like, and so on.

      One of these is more evil than the other, even though they're both "just technologies".

    • I think people are too quick to dismiss the possibility that some technologies are just bad and harmful. We can't shrug off responsibility by saying "I'm just making a neutral technology; the people using it are the ones causing harm."

How should a government act to prevent people from misrepresenting their characteristics online in order to access services that the government has formally regulated, in law, on the basis of those characteristics?

If your answer is “they shouldn’t ever do that”, then you’re promoting an uncompromising position that governments are disinclined to adopt, since they are the primary issuers and verifiers of identity on behalf of their citizens.

If your answer is “they should do that differently”, then you have a discussion about (for example) ZKPs or biometric signatures, which is the discussion in the thread you’re replying to.

Which of these two paths are you here to discuss? I want to be sure I’ve correctly understood you to be arguing for the former in a thread about the latter.

You're not necessarily being surveilled just because you're forced to authenticate yourself. It often is the case in practice, but it's not inherent, and mixing the two up makes the discussion too imprecise for a technical forum.
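
To make "authenticated but not identified" concrete, here is a toy sketch of Chaum-style blind signatures, the idea behind token schemes like Privacy Pass (textbook RSA with tiny primes, all names assumed, not production crypto). The issuer attests "this client is authorized" without being able to link the token it signs to the session where the token is later redeemed:

    # Toy RSA blind-signature token (illustrative only, NOT real crypto).
    import hashlib
    import secrets

    # Issuer key pair: n = p*q, public exponent e, private exponent d.
    p, q = 1000003, 1000033
    n = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # Client: blind a random token before sending it to the issuer.
    token = secrets.token_bytes(16)
    r = secrets.randbelow(n - 2) + 2          # blinding factor (coprime to n w.h.p.)
    blinded = (h(token) * pow(r, e, n)) % n   # the issuer sees only this

    # Issuer: signs the blinded value after checking the client is
    # authorized, but learns nothing about `token` itself.
    blind_sig = pow(blinded, d, n)

    # Client: unblind to obtain an ordinary signature on h(token).
    sig = (blind_sig * pow(r, -1, n)) % n

    # Any verifier: checks with the public key alone. Nothing here lets
    # the issuer link (token, sig) back to the issuance session.
    assert pow(sig, e, n) == h(token)
    print("valid, unlinkable token")

Real schemes (VOPRFs, BBS+ credentials, the ZKPs mentioned upthread) are far more sophisticated, but the point stands: proving "I am authorized" and revealing "who I am" are separable in principle.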

Hardware attestation also tends to suffer from centralization, but that's a separate problem.

By just labeling it an abstract bad thing, without seeing the nuance, I'm afraid you won't convince those in power to pass or block these laws, or convince your fellow voters which efforts to support.

  • > It often is the case in practice, but it's not inherent

    Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.

    Hardware attestation is a surveillance mechanism. If China were enforcing the same rule, you would immediately identify it as a state-driven deanonymization effort. But when the US does it, you backpedal and suggest that it could be implemented safely in some hypothetical alternate reality. Do you want to live in a dystopia?

    • > Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel.

      Who is?

      > But when the US does it [...]

      I don't live in the US, and while the US often sets global trends, in this case I don't think that's very likely, unless it somehow goes significantly better than expected (i.e., the benefits actually vastly exceed the collateral damage to anonymity, and to resiliency via heterogeneity).

  • I think labeling this an abstract problem, on the grounds that the existing implementations each have concrete but different problems, is a bit of a motte-and-bailey fallacy.

    The surveillance of the future will be powered by the things we produce today. If the accepted mechanisms leave cookies, those cookies will be used, tracked, and monetized. The bad part is the forced verification required to do things on the internet. Making it start at the hardware is a lock-in that's not okay. Businesses will always own the services, and making standards that trade our practical liberty for the sake of security is a deeply compromised position, in my opinion.

    And it does start with age verification, followed by ID checks, etc. It's compromising precisely because no lines are drawn and no rights to privacy are codified in law. Without guardrails, the worse path will likely be taken for maximum profit.

  • > You're not necessarily being surveilled just because you're forced to authenticate yourself.

    Oh hell you are! Google's profit comes from ADS! It is in their interest to surveil, track, and deanonymize you TO SELL ADS.

    • Having thought about ads: what is the ideal feedback loop for information from manufacturers to consumers? How do you best distribute the information of who can manufacture what, at what cost and price, and what it does, and when is it appropriate for consumers to receive or pull that information, and from where? And if it ends up as a monopoly of one centralized system, how do you allow a competitor to break through without ads?

    • A counterexample is not a valid refutation of the general point. It can be both true that Google will deanonymize you, given the chance, and that anonymous attestation is possible.

  • Those in power who need convincing are the same ones pushing for mass surveillance online.

There is a problem: it's becoming increasingly hard to determine whether the packets arriving at your service are sent at the behest of a human in the course of normal activity or by an automated program.

If the internet were nothing but static content, that wouldn't be much of a problem. But we live in a world where packets arriving at your service cause significant state changes to your database (such as user-generated content).

I suspect that we are currently in the do-something-about-it valley of the graph, which is why you see all this angst from the big players. Would Google really care if automated programs approximated real humans so well that absolutely no one could tell? I suspect they would not only be happy with such a state of affairs, they would join in.

  • That's not a problem at all. It's an artificial distraction, created to manufacture consent by those pushing for this shit.