Comment by mike_hearn

21 days ago

I've worked on such systems. Love it or hate it, remote attestation slaughters abuse. It is just much harder to scale up fraud schemes to profitable levels if you can't easily automate anything. That's why it exists and why banks use it.

Wouldn't device-bound keys from a set of trusted secure-element issuers (e.g. Yubikeys) work just as well, but without locking down the whole goddamn software stack?

  • RA schemes don't lock down the whole software stack, just the parts that are needed to allow the server to reason about the behavior of the client. You can still install whatever apps you want, and those apps can do a lot of customization e.g. replace the homescreen, as indeed Android allows today.

    You need to attest at least the kernel, firmware, graphics/input drivers, window management system, etc., because otherwise actions you think are being taken by the user might actually be issued by malware. You have to know that the app's onPayClicked() event handler is running because the human owner genuinely clicked it (or because an app they authorized to automate for them, like an a11y app, did). Getting that assurance requires the OS to enforce app isolation and mediate inter-app communication via secure boundaries.
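
    To make that concrete, here's a minimal sketch (Kotlin) of the server-side half of such a check. Everything here is hypothetical - the claim fields, names and encoding are illustrative, not any real vendor's API - but it shows why the whole stack has to be in the measurement: the server only trusts the click if every layer it depends on was measured and signed by a hardware-backed key.

        import java.security.PublicKey
        import java.security.Signature

        // Hypothetical claims an attested client might send with a request.
        data class AttestationClaims(
            val bootStateVerified: Boolean, // verified-boot chain passed
            val osBuildDigest: ByteArray,   // hash covering kernel + system image
            val appSigningDigest: ByteArray,// digest of the app's signing cert
            val nonce: ByteArray            // server-issued, binds claims to this session
        )

        // Server side. The signing key is assumed to chain to a hardware
        // vendor root (validated elsewhere), so software alone can't forge
        // the claims. A genuine-looking onPayClicked() means nothing unless
        // the kernel, drivers and window system delivering the click are
        // all inside what was measured.
        fun isTrustworthy(
            claims: AttestationClaims,
            encodedClaims: ByteArray,  // the exact bytes the device signed
            signature: ByteArray,
            deviceKey: PublicKey,
            expectedNonce: ByteArray,
            knownGoodOs: Set<String>,  // hex digests of OS builds you accept
            officialApp: String        // hex digest of your app's signing cert
        ): Boolean {
            val verifier = Signature.getInstance("SHA256withECDSA").apply {
                initVerify(deviceKey)
                update(encodedClaims)
            }
            if (!verifier.verify(signature)) return false
            fun hex(b: ByteArray) = b.joinToString("") { "%02x".format(it) }
            return claims.bootStateVerified &&
                claims.nonce.contentEquals(expectedNonce) &&
                hex(claims.osBuildDigest) in knownGoodOs &&
                hex(claims.appSigningDigest) == officialApp
        }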

    • That's way too much locking down, and while it gives me some control, it does not give me real control. I cannot remove or modify software in the stack whose behavior I disagree with (e.g. all of Play Services). I can't replace the OS with a more privacy- and security-focused OS like GrapheneOS.

      Imagine if this had been done for desktop computers before we had smartphones. That's just crazy.

      Relying on hardware-bound keys is fine, but then the scope of the hardware and software stack that needs to be locked down should be severely limited to dedicated, external hardware tokens. Having to lock down the whole OS and service stack is just bad design, plain and simple, since it prioritizes control over freedom.

1. I don't believe you. This is a measurement problem - you eliminated an avenue for measuring abuse, and now you simply assume abuse doesn't happen because you trust the client.

2. It does not eliminate any meaningful type of fraud. Phishing still works, social engineering still works, stealing TOTP codes still works.

Ultimately I don't need to install a fake app on your phone to steal your money. The vast, vast majority of digital bank fraud is not done this way. The vast majority of fraud happens within real bank apps and real bank websites, in which an unauthorized user has gained account access.

I just steal your password, or social-engineer you out of your funds or account information.

This also doesn't stop check fraud, wire fraud, or credit card fraud. Again - I don't need a fake bank app to steal your CC. I just send you an email linking to a fake website, you put in your CC - phishing.

  • 1. Well, going into denial about it is your prerogative. But then you shouldn't express bafflement about why this stuff is being used.

    Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved". Fraud and abuse have ground-truth signals in the form of customers getting upset at you because their account got hacked and something bad happened to them.

    2. This stuff is also used to block phishing and it works well for that too. I'd explain how, but you wouldn't believe me.

    You mention check fraud, so maybe you're banking with some US bank that has terrible security. Anywhere outside the USA, using a minimally competent bank means:

    • A password isn't enough to get into someone's bank account. Banks don't even use passwords at all. Users must authenticate by answering a smartcard challenge, or with a keypair stored in a secure element in a smartphone that's been paired with the account via a mailed setup code (usually PIN- or biometric-protected). A sketch of this challenge-response primitive follows after this list.

    • There is no such thing as check fraud, because checks are essentially extinct.

    • There is no such thing as credit card phishing either. All CC transactions are authorized in real time using push messaging to the paired mobile apps. To steal money from a credit card you have to confuse the user into authorizing the transaction on their phone, which is possible if they don't pay attention to the name of the merchant displayed on screen, but it's not phishing or credential theft.
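
    As promised in the first bullet, here's a minimal sketch (Kotlin, with a made-up merchant and amount) of the challenge-response primitive behind both the smartcard login and the push-to-approve flow. In a real deployment the private key is generated inside the secure element at pairing time and never leaves it; this just shows the shape of the protocol.

        import java.security.KeyPairGenerator
        import java.security.SecureRandom
        import java.security.Signature

        fun main() {
            // Stand-in for the key generated inside the secure element at
            // pairing time; in reality the private half is non-exportable.
            val deviceKey = KeyPairGenerator.getInstance("EC")
                .apply { initialize(256) }.genKeyPair()

            // Bank side: a fresh nonce plus what is being approved. Signing
            // over the merchant and amount is what makes the on-screen
            // prompt binding rather than a blind "yes".
            val nonce = ByteArray(32).also { SecureRandom().nextBytes(it) }
            val challenge = nonce + "PAY 49.99 EUR to ACME GmbH".toByteArray()

            // Device side: only runs after a local PIN/biometric unlock.
            val answer = Signature.getInstance("SHA256withECDSA").run {
                initSign(deviceKey.private)
                update(challenge)
                sign()
            }

            // Bank side: a stolen password is useless here, because without
            // the paired hardware nothing can answer the challenge.
            val ok = Signature.getInstance("SHA256withECDSA").run {
                initVerify(deviceKey.public)
                update(challenge)
                verify(answer)
            }
            println("challenge answered correctly: $ok")
        }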

    • > Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved".

      There's a name for this mistake: the McNamara fallacy - treating what you can measure as if it were the whole problem.

      People make this mistake all the time. It's a very common measurement problem, because measuring is actually very hard.

      Are we measuring the right thing? Does it mean what we think it means? Companies spend hundreds of billions trying to answer those questions.

      2. No, it cannot block phishing, because if I get your password, I can get in.

      To your points:

      - yes, banks in the US use one-time codes too. Very smart of you, unfortunately not very creative. Trivial to circumvent in most cases. Email is the worst, SMS is better, TOTP is best.

      TOTP doesn't matter if the user just takes their code and inputs it into whatever field (see the sketch after this list).

      - yes, there is such a thing as check fraud; you not knowing what it is doesn't matter.

      - if I had to authorize each CC transaction on my phone, I'd put a bullet in my head. That's shit.
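
      As flagged above, roughly all of TOTP (RFC 6238) fits in one function - this is a sketch, not a vetted implementation. Note what's absent: nothing binds the six digits to the site the user types them into, which is exactly why a phishing page can relay the code to the real bank in real time.

          import java.nio.ByteBuffer
          import javax.crypto.Mac
          import javax.crypto.spec.SecretKeySpec

          // RFC 6238 TOTP: HMAC over the current 30-second interval,
          // dynamically truncated to 6 digits (RFC 4226). The output is a
          // bearer code - it carries no origin binding at all, so whoever
          // sees it within the time window can replay it anywhere.
          fun totp(secret: ByteArray, unixSeconds: Long, stepSeconds: Long = 30): String {
              val counter = ByteBuffer.allocate(8)
                  .putLong(unixSeconds / stepSeconds).array()
              val mac = Mac.getInstance("HmacSHA1").apply {
                  init(SecretKeySpec(secret, "HmacSHA1"))
              }
              val hash = mac.doFinal(counter)
              val offset = hash.last().toInt() and 0x0f
              val binary = ((hash[offset].toInt() and 0x7f) shl 24) or
                  ((hash[offset + 1].toInt() and 0xff) shl 16) or
                  ((hash[offset + 2].toInt() and 0xff) shl 8) or
                  (hash[offset + 3].toInt() and 0xff)
              return "%06d".format(binary % 1_000_000)
          }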

      6 replies →

    • Well, it is still a phone after all, what with UMA and baseband processing. You don't need to spend much time at Blackhat/Defcon to realize that any true attempt at sealing it up is akin to plugging leaks in a sieve with epoxy. It's far too porous.

      Meanwhile, if attestation does reduce fraud, the ownability of the device by its user is now forfeit, all for chasing a dragon's tail.