Comment by mike_hearn
21 days ago
I developed this stuff at Google (JS puzzles that "attest" web browsers), back in 2010 when nobody was working on it at all and the whole idea was viewed as obviously non-workable. But it did work.
Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient.

The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.
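To make concrete why per-IP limits never fire against this pattern, here's a toy simulation (all names, thresholds, and numbers are made up for illustration, not taken from any real system): each bot tries a couple of phished passwords and goes dormant, staying far below any plausible failure threshold.

```python
from collections import defaultdict

# Hypothetical per-IP rate limiter: block an IP after 5 failed attempts.
# The threshold is illustrative; real systems are more elaborate but face
# the same problem.
FAIL_LIMIT = 5

class PerIpRateLimiter:
    def __init__(self, limit=FAIL_LIMIT):
        self.limit = limit
        self.failures = defaultdict(int)

    def allow(self, ip):
        return self.failures[ip] < self.limit

    def record_failure(self, ip):
        self.failures[ip] += 1

# Simulate a credential-stuffing botnet: 1000 bots, each tries up to
# 3 passwords from a phished list (assume ~1 in 3 is still valid),
# then goes dormant forever.
limiter = PerIpRateLimiter()
blocked = compromised = 0
for bot in range(1000):
    ip = f"10.0.{bot // 256}.{bot % 256}"   # each bot has its own IP
    for attempt in range(3):
        if not limiter.allow(ip):
            blocked += 1
            continue
        # Model "the phished password is correct on the first try"
        # for roughly a third of the targeted accounts.
        password_correct = (attempt == 0 and bot % 3 == 0)
        if password_correct:
            compromised += 1
        else:
            limiter.record_failure(ip)

print(blocked, compromised)  # 0 blocked; hundreds of accounts compromised
```

No IP ever comes close to the failure limit, so the limiter blocks nothing while a third of the targeted accounts fall.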
What does work is interrogating the client that is doing the logging in. Attackers are often CPU- and RAM-constrained, because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices; the actual compute is happening elsewhere. The ideal situation, from an attacker's perspective, is a site that relies only on server-side rate limiting. They write a nice async bot that can keep tens of thousands of HTTP requests in flight simultaneously from the developer's desktop, just POSTing some strings to the server to get what they want (money, sending emails, whatever).
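The shape of that bot is trivial, which is the point. A sketch, with a stub coroutine standing in for the real HTTP POST (everything here is illustrative; there is no real network code):

```python
import asyncio

# Illustrative only: the "nice async bot" pattern described above, with a
# stub in place of a real `POST /login` through a botnet proxy. Against a
# server that only rate-limits, this is all the tooling an attacker needs.
async def try_login(i, username, password):
    await asyncio.sleep(0)  # stand-in for a network round-trip
    return False            # pretend the server said "wrong password"

async def main():
    # Tens of thousands of requests can be in flight from one desktop.
    tasks = [try_login(i, f"user{i}", "hunter2") for i in range(10_000)]
    results = await asyncio.gather(*tasks)
    return len(results)

n = asyncio.run(main())
print(n)  # 10000 stub "requests" completed
```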
Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.
The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.
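The idea can be sketched in miniature (this is a toy stand-in for illustration, not Google's actual scheme): the server generates a fresh random program for a tiny stack VM, precomputes the answer, and the client has to actually execute the program to respond, which forces it to run a real interpreter rather than replay a canned string.

```python
import random

# Toy challenge scheme: random programs for a tiny stack VM. All opcode
# names and sizes are invented for this sketch.
OPS = ["PUSH", "ADD", "MUL", "XOR"]

def generate_program(rng, length=20):
    # Seed the stack with two values so binary ops have operands.
    prog = [("PUSH", rng.randrange(1, 100)), ("PUSH", rng.randrange(1, 100))]
    for _ in range(length):
        op = rng.choice(OPS)
        if op == "PUSH":
            prog.append(("PUSH", rng.randrange(1, 100)))
        else:
            prog.append((op, None))
    return prog

def run(prog):
    stack = []
    for op, arg in prog:
        if op == "PUSH":
            stack.append(arg)
        elif len(stack) >= 2:  # skip binary ops when operands are missing
            b, a = stack.pop(), stack.pop()
            if op == "ADD":
                stack.append((a + b) & 0xFFFFFFFF)
            elif op == "MUL":
                stack.append((a * b) & 0xFFFFFFFF)
            elif op == "XOR":
                stack.append(a ^ b)
    return stack[-1] if stack else 0

# Server side: issue a fresh program per request and remember the answer.
rng = random.Random()
challenge = generate_program(rng)
expected = run(challenge)

# Client side: must actually execute `challenge` and send back the result.
assert run(challenge) == expected
```

Each visitor gets a different program, so there is nothing stable to reverse-engineer once and replay.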
I may be misunderstanding, but it sounds like this kind of widely distributed attack would also be stoppable by checking how often each account is attempting to log in? And if they're only testing two or three passwords _per account_ per day, then Google could further block them by forcing people not to use the top 10,000 passwords from any of the popular lists (including, over time, the passwords provided to Google)?
The attackers only try one or two passwords, that they hacked/phished. They aren't guessing popular passwords, usually they know the correct password for an account and would log in successfully on the first try. There are no server side signals that can be used to rate limit them, especially as the whole attack infrastructure is automated and they have unlimited patience.
Forgive me for being reductive, but aren't these leaked accounts a lost cause? The vulnerability in question is attackers being able to log into user accounts with leaked credentials. The only mitigations are to lock out users identified in other password breaches and reconfirm identity out-of-band (say, through a local bank branch), to add a second factor like a hardware token, or to use restrictive heuristics like IP-geolocation consistency between visits.
If 3 attempts per hour is enough to gain access, then it doesn't seem like attestation can save you. I imagine a physical phone farm would still be economically viable in that case.
> an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts
How is the attacker supposed to brute-force anything with 2-3 login attempts?
Even if 1M nodes each submitted 10 login attempts per hour, they would only be able to try about 7 billion passwords per month against an account, which is ridiculously low for brute-forcing even a moderately secure password (let alone that there's definitely something to do on the back end if you see one particular account get a million login attempts in an hour from different IPs…).
So I must have misunderstood the threat model…
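For what it's worth, the parent's arithmetic checks out:

```python
# Sanity-checking the figure above: 1M nodes at 10 attempts/hour each.
nodes = 1_000_000
attempts_per_hour = 10
hours_per_month = 24 * 30
per_month = nodes * attempts_per_hour * hours_per_month
print(per_month)  # 7,200,000,000, i.e. about 7 billion, as stated
```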
Brute force here can mean they try millions of accounts and get into maybe a quarter of them on their first try, not that they make millions of tries against a single account.
If you have an attacker that can gain access on 25% of its attempts, it doesn't matter whether there is a botnet with millions of IPs; they would still have around a 25% success rate on just 10 IPs. It has nothing to do with brute force; it just means you have plenty of compromised accounts in the wild and you want to prevent bad actors from using them at scale.
The threat model is entirely different from what your "brute force" phrasing implies, and it is also a threat model that isn't relevant to banking, which was the topic of the discussion in the first place. More importantly, it doesn't affect the security of the user.
That's a very uncommon understanding of brute force, to be honest. Generally I see the term applied to cases where there's next to no prior knowledge, just enumeration.