Comment by pona-a
21 days ago
How does device attestation reduce brute force? Does the backend not enforce attempt limits per account? If it doesn't, that would be considered a critical vulnerability. If it does, then attestation doesn't serve that purpose.
As for compromised devices, assuming you mean an evil maid, Android already implements secure boot, forcing a complete data wipe when the chain of trust is broken. I think the number of scary warnings is already more than enough to deter a clueless "average user", and there are easier ways to phish the user.
And those apps use MEETS_DEVICE_INTEGRITY rather than MEETS_STRONG_INTEGRITY so a compromised device can absolutely be used to access critical services. (Usually because strong integrity is unsupported on old devices)
This reminds me of providers like Xiaomi making it harder to unlock the bootloader due to phones being sold as new but flashed with a compromised image.
Maybe a good compromise is to change the boot screen to have a label that the phone is running an unofficial ROM, just like it shows one for unlocked bootloaders? If the system can update that dynamically based on unlock state, why can't it do it based on public keys? Might also discourage vendors/ROM devs from using test keys like Fairphone once did.
Pixels with, for example, GrapheneOS already do exactly that:
"Your device is loading a different operating system."
I developed this stuff at Google (JS puzzles that "attest" web browsers), back in 2010 when nobody was working on it at all and the whole idea was viewed as obviously non-workable. But it did work.
Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient. The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts, against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.
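The arithmetic behind this is worth making concrete. A minimal sketch (with hypothetical numbers in the same ballpark as the description above) of why per-IP rate limiting never fires against this attack pattern: each node makes only a handful of attempts and then goes dormant, so no individual IP ever crosses a plausible threshold.

```python
# Hypothetical simulation: a distributed credential-stuffing run against
# a server that rate-limits per IP. Numbers are illustrative, not measured.
from collections import Counter

PER_IP_LIMIT = 10          # a generous per-IP daily attempt threshold
BOTNET_SIZE = 1_000_000    # nodes, roughly the scale described above
ATTEMPTS_PER_NODE = 3      # each IP tries a few accounts, then goes dormant

attempts_by_ip = Counter()
blocked = 0
for ip in range(BOTNET_SIZE):
    for _ in range(ATTEMPTS_PER_NODE):
        attempts_by_ip[ip] += 1
        if attempts_by_ip[ip] > PER_IP_LIMIT:
            blocked += 1

total = BOTNET_SIZE * ATTEMPTS_PER_NODE
print(f"attempts: {total:,}, blocked by per-IP limit: {blocked}")
# Nothing is blocked: every node stays far under the limit, yet the
# botnet still works through millions of accounts in aggregate.
```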
What does work is investigating the client that's doing the logging in. Attackers are often CPU and RAM constrained because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices; the actual compute is happening elsewhere. The ideal situation from an attacker's perspective is a site that only uses server-side rate limiting. They write a nice async bot that can have tens of thousands of HTTP requests in flight simultaneously on a single desktop, which just POSTs some strings to the server to get what they want (money, sending emails, whatever).
Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.
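To make the shape of remote attestation concrete, here is a highly simplified challenge/response sketch. Real schemes (e.g. Play Integrity) use hardware-backed asymmetric keys and certificate chains rather than a shared secret; the HMAC below just stands in for "a signature only a genuine device can produce", and all the names are illustrative.

```python
# Toy attestation sketch: server issues a fresh nonce, the device signs it
# with a key the attacker cannot extract, the server verifies the signature.
import hmac
import hashlib
import os

DEVICE_KEY = os.urandom(32)  # in reality: hardware-backed, never exported

def server_issue_challenge() -> bytes:
    return os.urandom(16)    # fresh nonce per login, prevents replay

def device_attest(challenge: bytes) -> bytes:
    # conceptually runs inside the device's trusted environment
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = server_issue_challenge()
assert server_verify(nonce, device_attest(nonce))   # genuine device passes
assert not server_verify(nonce, os.urandom(32))     # forged response fails
```

The economic point above falls out of this shape: a bot without access to a real device's key can't produce a valid response, so the attacker is pushed toward racks of genuine hardware.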
The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.
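A toy sketch of the "random program in a tiny VM" idea: the server generates a fresh program per challenge, and the client must actually interpret it to produce the answer. The real thing is interpreted in JavaScript in the browser; both halves are in Python here purely for illustration, and every name is hypothetical. The point is that each challenge is a different program, so an attacker must embed a full interpreter (in practice, drive a real browser) rather than replay a canned answer.

```python
# Toy challenge VM: the server derives a random op sequence from a seed,
# the client executes it, and the server checks the resulting token.
import random

OPS = ["add", "xor", "mul"]

def generate_program(seed: int, length: int = 8):
    rng = random.Random(seed)
    return [(rng.choice(OPS), rng.randrange(1, 1000)) for _ in range(length)]

def interpret(program, value: int = 1) -> int:
    # the client-side work: only by executing the ops do you get the token
    for op, operand in program:
        if op == "add":
            value = (value + operand) % 2**32
        elif op == "xor":
            value ^= operand
        elif op == "mul":
            value = (value * operand) % 2**32
    return value

challenge = generate_program(seed=42)   # server picks a fresh seed per login
token = interpret(challenge)            # client must run the interpreter
assert token == interpret(generate_program(seed=42))  # server re-derives and checks
```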
I may be misunderstanding, but it sounds like this kind of widely distributed attack would also be stoppable by checking how often each account is receiving login attempts? And if they're only testing two or three passwords _per account_, per day, then Google could further block them by forcing people not to use the top 10,000 passwords in any of the popular lists (including, over time, the passwords submitted to Google)?
The attackers only try one or two passwords, that they hacked/phished. They aren't guessing popular passwords, usually they know the correct password for an account and would log in successfully on the first try. There are no server side signals that can be used to rate limit them, especially as the whole attack infrastructure is automated and they have unlimited patience.
> an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts
How is the attacker supposed to bruteforce anything with 2-3 login attempts?
Even if 1M nodes submitted 10 login attempts per hour each, they would only be able to try about 7 billion passwords per month per account, which is ridiculously low for brute-forcing even moderately secure passwords (let alone that there's definitely something the backend can do if it sees one particular account receiving a million login attempts in an hour from different IPs…).
So I must have misunderstood the threat model…
Brute force here can mean they try millions of accounts and get into maybe a quarter of them on their first try, not that they make millions of tries against a single account.