
Comment by brookst

21 days ago

That’s a “seatbelts are no good because people still die in car crashes” argument with a topping of “actually they’re bad because they give you a false sense of security”.

Play Integrity hugely reduces brute-force and compromised-device attacks. Yes, it does not eliminate either, but security is a game of statistics, because there is rarely a verifiably perfect solution in complex systems.

For most large public apps, the vast majority of signin attempts are malicious. And the vast majority of successful attacks come from non-attested platforms like desktop web. Attestation is a valuable tool here.

How does device attestation reduce brute force? Does the backend not enforce attempt limits per account? If it doesn't, that would be considered a critical vulnerability. If it does, then attestation doesn't serve that purpose.

As for compromised devices, assuming you mean an evil-maid attack, Android already implements secure boot, forcing a complete data wipe when the chain of trust is broken. I think the number of scary warnings is already more than enough to deter a clueless "average user", and there are easier ways to phish the user.

  • And those apps use MEETS_DEVICE_INTEGRITY rather than MEETS_STRONG_INTEGRITY so a compromised device can absolutely be used to access critical services. (Usually because strong integrity is unsupported on old devices)

    This reminds me of providers like Xiaomi making it harder to unlock the bootloader due to phones being sold as new but flashed with a compromised image.

    • Maybe a good compromise is to change the boot screen to have a label that the phone is running an unofficial ROM, just like it shows one for unlocked bootloaders? If the system can update that dynamically based on unlock state, why can't it do it based on public keys? Might also discourage vendors/ROM devs from using test keys like Fairphone once did.

      1 reply →

  • I developed this stuff at Google (JS puzzles that "attest" web browsers), back in 2010 when nobody was working on it at all and the whole idea was viewed as obviously non-workable. But it did work.

    Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient. The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts, against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.

    What does work is investigating the app doing the logging in. Attackers are often CPU and RAM constrained because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices. The actual compute is happening elsewhere. The ideal situation from an attacker's perspective is a site that is only using server side rate limiting. They write a nice async bot that can have tens of thousands of HTTP requests in flight simultaneously on the developer's desktop which just POSTs some strings to the server to get what they want (money, sending emails, whatever).

    Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.

    The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.

    • I may be misunderstanding, but it sounds like this kind of widely distributed attack would also be stoppable by checking how often each account is being attempted? And if they're only testing two or three passwords _per account_, per day, then Google could further block them by forcing people not to use the top 10,000 passwords from any of the popular lists (including, over time, the passwords provided to Google)?

      4 replies →

    • > an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts

      How is the attacker supposed to brute-force anything with 2-3 login attempts?

      Even if 1M nodes submitted 10 login attempts per hour, they would only be able to try about 7 billion passwords per month per account. That's ridiculously low for brute-forcing even moderately secure passwords (let alone that there's definitely something to do on the back-end side of things if you see one particular account with 1 million login attempts in an hour from different IPs…).

      So I must have misunderstood the threat model…

      4 replies →
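The randomized-program idea described a few comments up can be illustrated with a toy sketch. Everything here (the instruction set, the seeding, the function names) is hypothetical; real deployments ship freshly generated, obfuscated JavaScript, not Python. The point is that the expected answer changes with every request, so a bot must actually execute the program to log in:

```python
import hmac
import hashlib
import random

# Tiny instruction set for the generated "puzzle program" (32-bit ops).
OPS = [
    lambda x, k: (x + k) & 0xFFFFFFFF,                                  # add
    lambda x, k: x ^ k,                                                 # xor
    lambda x, k: ((x << (k % 32)) | (x >> (32 - k % 32))) & 0xFFFFFFFF, # rotate
]

def generate_puzzle(seed: int):
    """Server side: derive a short random program from a per-request seed."""
    rng = random.Random(seed)
    return [(rng.randrange(len(OPS)), rng.randrange(1, 2**31)) for _ in range(8)]

def run_puzzle(program, x: int) -> int:
    """Client side: the only practical way to get the answer is to run the steps."""
    for op_idx, k in program:
        x = OPS[op_idx](x, k)
    return x

def verify(seed: int, nonce: int, answer: int) -> bool:
    """Server side: recompute the expected answer and compare."""
    expected = run_puzzle(generate_puzzle(seed), nonce)
    return hmac.compare_digest(
        hashlib.sha256(str(expected).encode()).digest(),
        hashlib.sha256(str(answer).encode()).digest(),
    )
```

The security comes not from the puzzle being hard to compute but from it being regenerated constantly, forcing attackers into a full JS runtime per request instead of a cheap async POST loop.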

It's not that type of argument, because seatbelts actually work; Play Integrity does not.

Play integrity is just DRM. DRM does not prevent the most common types of attack.

If I have your password, I can steal your money. If I have your CC, I can post unauthorized transactions.

Attestation does not prevent anything. How would attestation prevent malicious login attempts? Have you actually sat down and thought this through? It does not, because that is impossible.

The vast, vast, VAST majority of exploits and fraud DO NOT come from compromised devices. They come from unauthorized access, which DRM solutions prevent only naively, at the surface level.

For example, HBO Max will prevent unauthorized access for DRM purposes in the sense that I cannot watch a movie without logging in. It WILL NOT prevent access if I log in, or anyone else on Earth logs in. Are you seeing the problem?

  • Cool. So you run a banking website. You get several hundred thousand legit logins a day, and maybe ten million that you block. Maybe a hundred million these days.

    Now, you have a bucket of mobile users coming to you with attestation signals saying they’ve come from secure boot, and they are using the right credentials.

    And you’ve got another bucket saying they’re Android but with no attestation, also using the right credentials.

    You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.

    Do you treat the logins the same? Real customers HATE intrusive security like captchas.

    Are you understanding the tech better now? The entire problem and solution space are different from what you think they are.

    • > You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.

      1. I don't believe this research; measurement is hard. If we just treat using an unattested device as malicious, as the Play Integrity API does now, then the numbers are fudged.

      2. Even IF the research is true, relative probability is doing the heavy lifting here.

      There are still going to be more malicious attempts from attested devices than from unattested ones. Why? Because almost everyone is running attested devices. Duh.

      Grandma isn't going to load an unsigned binary on her phone. Let's just be fucking for real for one second here.

      No, she's gonna take a phone call and write a check, or get an email, go to a sketchy website, enter her login credentials, and then open the inevitable 2FA email and enter the code she got into the website. Guess what: you don't need a rooted device for that. You just don't.

      There are extremely high effort malicious attempts, like trying to remotely rootkit someone's phone, and then low effort ones - like email spam and primitive social engineering.

      You guess which ones you actually see in the wild.

      Is there a real threat here? Sure. But threat modeling matters. For 99.99% of people, their threat model just does not involve unsigned binaries they manually loaded.

      Why are we sacrificing everything to optimize for the 0.01% when we haven't even gotten CLOSE to optimizing for the other 99.99%?

      Isn't that fucking stupid? Why yes, yes it is.

> That’s a “seatbelts are no good because people still die in car crashes”

Except it's not a seatbelt; it's a straitjacket with a seatbelt pattern drawn on it: it restrains the user's freedom in exchange for the illusion of security.

And like a straitjacket, it's imposed without user consent.

The difference from a straitjacket is that there's no doctor involved to determine who really needs it for protection against their own weakness, and no due process to put boundaries on its use; it's applied to everyone by default.

Great. Let's just require every single computing device to be verified, signed, and attested by a government agency, just in case it is ever misused to attack a Google online service that cannot possibly be bothered to spend one nanosecond actually thinking about security.

What could possibly go wrong? It's not only morally questionable no matter what "advantages" it provides Google, it's also technically ridiculous, because _even if every single computing device were attested_, by construction I could still trivially find ways to use them to "brute force" Google logins. The technical "advantage" of attestation immediately drops to zero once it is actually enforced (this is where the seatbelt analogy falls apart).

The next thing I suggest after forcing remote attestation on all devices is tying these device IDs to government-issued personal ID. Let's see how that goes over. And then have the government send a death squad whenever one of these devices is used to attack Google services. That should also improve security.

Here's the dystopian future we're building, folks. Take it or leave it. After all, it statistically improves security!

  • You just proved the seatbelt analogy.

    Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.

    This is not the anti-attestation / anti-seatbelt argument many think it is.

    All security is mitigation. There is no perfection.

    But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.

    I think your enthusiasm for melodrama and snark may be clouding your judgment of the actual topic.

    • > Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.

      It won't solve the problem for _anyone_ once it is required, because it is trivial to bypass once the incentive is there. This is what kills it technically, and that's before even getting into the other cons (which really should not be ignored). Seatbelts absolutely do not have this problem.

      > All security is mitigation. There is non perfection.

      This is an absolutely meaningless tautology: a perfectly true statement that adds absolutely nothing to the discussion.

      Say I argue in favor of "putting a human to verify each and every banking transaction with a phone call to the source and the destination". You disagree, pointing out the costs, the waste of everyone's time, and that the security improvement would be minimal at best. And then I counter with "All security is mitigation, there is no perfection!".

      Can you see what you're doing here? This is another textbook example of the politician's fallacy (something must be done; this is something; therefore we must do this).

      It is trying to bypass the discussion of the actual merits of the proposal, as well as its cons, by saying "well, it does something!". True, it does something. So what? If the cons are bad enough, or the benefit too small, maybe it's best NOT to do it anyway!

      > But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.

      Not long ago we had, right here on HN, a discussion about the merits of remote attestation for anti-cheat: it turns out the "lot of money" is a custom USB mouse (or an add-on to one) that costs cents to make. Sure, it's not zero. You have to go more and more draconian to actually make it "a lot of money", but then you'll tell me I'm being melodramatic.

  • > After all, it statistically improves security!

    Probably not even that. It limits liability, and that's its only purpose, just like the manual in your car: nobody will ever read it, but it contains a warning for every single thing that could happen.

    • I've actually read the manual for my car, and it absolutely does not "contain a warning for every single thing that could happen".