Comment by viktorcode

6 hours ago

> An app should have absolutely no way of knowing what kind of device it’s running on or what changes the user has made to the system.

and therefore the app cannot give a reasonable guarantee that it is not running in an adversarial environment that actively tries to break the app's integrity. Thus, the app cannot be used as a verified ID with a government-grade level of trust.

There's a difference between needing to lock down the whole OS and needing to lock down just the secure element. The secure hardware component can sign a challenge and prove possession of a private key without anyone being able to extract it. Smartcards have done this for decades (most people here will know an implementation under the name Yubikey).

Conveying authentic information across untrusted channels (your phone screen, say) has been a solved problem since asymmetric cryptography was invented, back before I was born.

If your app needs to be protected from harm, it cannot protect the user from said harm. I hoped software engineering culture had escaped the precepts that make lockpicking a crime in the real world, and that we had successfully made it common knowledge that you can't grant any trust to the client, but it seems "trusted computing" is making some of us unlearn that lesson.

> an adversarial environment that actively tries to break the app's integrity

Can you elaborate on what this means? Who is the adversary? What kind of 'integrity'? This sounds like the kind of vague language DRM uses to obscure the fact that it sees the user as the enemy. An Xbox is 'compromised' when it obeys its owner, not Microsoft.

  • The app is running in a virtual environment that intercepts its system calls and is designed to patch the app's memory to fake an ID.