Comment by const_cast
20 days ago
Play Integrity does almost nothing to prevent malicious actors. In fact, I'd say it's probably harmful overall, because it gives actors like banks false confidence.
Even with play integrity, you should not trust the client. Devices can still be compromised, there are still phony bank apps, there are still keyloggers, etc.
With the Web, things like banks are sort of forced to design apps that do not rely on client trust. With something like play integrity, they might not be. That's a big problem.
I've worked on such systems. Love it or hate it, remote attestation slaughters abuse. It is just much harder to scale up fraud schemes to profitable levels if you can't easily automate anything. That's why it exists and why banks use it.
Wouldn't device-bound keys for a set of trusted issuing secure elements (e.g. Yubikeys) work just as well but without locking down the whole goddamn software stack?
RA schemes don't lock down the whole software stack, just the parts that are needed to allow the server to reason about the behavior of the client. You can still install whatever apps you want, and those apps can do a lot of customization e.g. replace the homescreen, as indeed Android allows today.
You need to attest at least the kernel, firmware, graphics/input drivers, window management system, etc., because otherwise actions you think are being taken by the user might be issued by malware. You have to know that the app's onPayClicked() event handler is running because the human owner genuinely clicked it (or an app they authorized to automate for them, like an a11y app). To get that assurance requires the OS to enforce app communication and isolation via secure boundaries.
1 reply →
1. I don't believe you. This is a measurement problem: you eliminated an avenue to measure abuse, because once you trust the client you just assume abuse isn't happening.
2. It does not eliminate any meaningful types of fraud. Phishing still works, social engineering still works, stealing TOTP codes still works.
Ultimately I don't need to install a fake app on your phone to steal your money. The vast, vast majority of digital bank fraud is not done this way. The vast majority of fraud happens within real bank apps and real bank websites, in which an unauthorized user has gained account access.
I just steal your password or social engineer your funds or account information.
This also doesn't stop check fraud, wire fraud, or credit card fraud. Again - I don't need a fake bank app to steal your CC. I just send an email to a bad website and you put in your CC - phishing.
1. Well, going into denial about it is your prerogative. But then you shouldn't express bafflement about why this stuff is being used.
Nobody is making mistakes as dumb as "we fixed something we can measure so the problem is solved". Fraud and abuse have ground-truth signals in the form of customers getting upset at you because their account got hacked and something bad happened to them.
2. This stuff is also used to block phishing and it works well for that too. I'd explain how, but you wouldn't believe me.
You mention check fraud, so maybe you're banking with some US bank that has terrible security. Anywhere outside the USA, using a minimally competent bank means:
• A password isn't enough to get into someone's bank account. Banks don't even use passwords at all. Users must auth by answering a smartcard challenge, or using a keypair stored in a secure element in a smartphone that's been paired with the account via a mailed setup code (usually either PIN or biometric protected).
• There is no such thing as check fraud.
• There is no such thing as credit card phishing either. All CC transactions are authorized in real time using push messaging to the paired mobile apps. To steal money from a credit card you have to confuse the user into authorizing the transaction on their phone, which is possible if they don't pay attention to the name of the merchant displayed on screen, but it's not phishing or credential theft.
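The first bullet's challenge-response login can be sketched roughly as below. This is a hypothetical illustration, not any bank's real protocol: real schemes sign the challenge with an asymmetric keypair held inside a secure element, while this sketch substitutes an HMAC over a shared secret so it runs with only the standard library.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    # The server generates a fresh random nonce per login attempt,
    # so a captured response cannot be replayed later.
    return secrets.token_bytes(32)

def device_respond(device_key: bytes, challenge: bytes) -> bytes:
    # On a real device this computation happens inside the secure
    # element, gated by PIN or biometric; the key never leaves it.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, response)
```

The key point is that a stolen password is useless here: the attacker would also need the key material, which is provisioned once via the mailed setup code and never leaves the device.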
8 replies →
That’s a “seatbelts are no good because people still die in car crashes” argument with a topping of “actually they’re bad because they give you a false sense of security”
Play integrity hugely reduces brute force and compromised device attacks. Yes, it does not eliminate either, but security is a game of statistics because there is rarely a verifiably perfect solution in complex systems.
For most large public apps, the vast majority of signin attempts are malicious. And the vast majority of successful attacks come from non-attested platforms like desktop web. Attestation is a valuable tool here.
How does device attestation reduce bruteforce? Does the backend not enforce the attempt limits per account? If it doesn't, that would be considered a critical vulnerability. If it does, then attestation doesn't serve that purpose.
As for compromised devices, assuming you mean an evil maid, Android already implements secure boot, forcing a complete data wipe when breaking the chain of trust. I think the number of scary warnings is already more than enough to deter a clueless "average user", and there are easier ways to phish the user.
And those apps use MEETS_DEVICE_INTEGRITY rather than MEETS_STRONG_INTEGRITY so a compromised device can absolutely be used to access critical services. (Usually because strong integrity is unsupported on old devices)
This reminds me of providers like Xiaomi making it harder to unlock the bootloader due to phones being sold as new but flashed with a compromised image.
2 replies →
I developed this stuff at Google (JS puzzles that "attest" web browsers), back in 2010 when nobody was working on it at all and the whole idea was viewed as obviously non-workable. But it did work.
Brute force attacks on passwords generally cannot be stopped by any kind of server-side logic anymore, and that became the case more than 15 years ago. Sophisticated server-side rate limiting is necessary in a modern login system but it's not sufficient. The reason is that there are attackers who come pre-armed with lists of hacked or phished passwords and botnets of >1M nodes. So from the server side an attack looks like this: an IP that doesn't appear anywhere in your logs suddenly submits two or three login attempts, against unique accounts that log in from the same region as that IP is in, and the password is correct maybe 25%-75% of the time. Then the IP goes dormant and you never hear from it again. You can't block such behavior without unworkable numbers of false positives, yet in aggregate the botnet can work through maybe a million accounts per day, every day, without end.
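The traffic pattern described above can be made concrete with a toy computation, using made-up numbers: each botnet IP submits only a few attempts, so a per-IP threshold tuned for classic brute force never fires, yet the aggregate volume is enormous.

```python
PER_IP_LIMIT = 20        # illustrative per-IP block threshold
ATTEMPTS_PER_IP = 3      # "two or three login attempts" per IP
BOTNET_SIZE = 100_000    # illustrative; the comment cites >1M nodes

# Each IP appears once, submits a handful of attempts, then goes dormant.
attempts_by_ip = {f"bot-{i}": ATTEMPTS_PER_IP for i in range(BOTNET_SIZE)}

# No single IP ever crosses the threshold...
flagged = [ip for ip, n in attempts_by_ip.items() if n > PER_IP_LIMIT]

# ...yet the aggregate attempt volume is huge.
total = sum(attempts_by_ip.values())
```

With these numbers, `flagged` is empty while `total` is 300,000 attempts, which is why the comment argues per-IP server-side limiting is necessary but not sufficient.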
What does work is investigating the app doing the logging in. Attackers are often CPU and RAM constrained because the botnet is just a set of tiny HTTP proxies running on hacked IoT devices. The actual compute is happening elsewhere. The ideal situation from an attacker's perspective is a site that is only using server side rate limiting. They write a nice async bot that can have tens of thousands of HTTP requests in flight simultaneously on the developer's desktop which just POSTs some strings to the server to get what they want (money, sending emails, whatever).
Step up the level of device attestation and now it gets much, much harder for them. In the limit they cannot beat the remote attestation scheme, and are forced to buy and rack large numbers of genuine devices and program robotic fingers to poke the screens. As you can see, the step-up from "hacking a script in your apartment in Belarus" to "build a warehouse full of robots" is very large. And because they are using devices controlled by their adversaries at that point, there's lots of new signals available to catch them that they might not be able to fix or know about.
The browser sandbox means you can't push it that far on the web, which is why high value targets like banks require the web app to be paired with a mobile app to log in. But you can still do a lot. Google's websites generate millions of random encrypted programs per second that run inside a little virtual machine implemented in Javascript, which force attackers to use a browser and then look for signs of browser automation. I don't know how well it works these days, but they still use it, and back when I introduced it (20% time project) it worked very well because spammers had never seen anything like it. They didn't know how to beat it and mostly just went off to harass competitors instead.
10 replies →
It's not that type of argument, because seatbelts actually work; Play Integrity does not.
Play integrity is just DRM. DRM does not prevent the most common types of attack.
If I have your password, I can steal your money. If I have your CC, I can post unauthorized transactions.
Attestation does not prevent anything. How would attestation prevent malicious login attempts? Have you actually sat down and thought this through? It does not, because that is impossible.
The vast, vast VAST majority of exploits and fraud DO NOT come from compromised devices. They come from unauthorized access, which DRM-style solutions only prevent at a naive, surface level.
For example, HBO Max will prevent unauthorized access for DRM purposes in the sense that I cannot watch a movie without logging in. It WILL NOT prevent access if I log in, or anyone else on Earth logs in. Are you seeing the problem?
Cool. So you run a banking website. You get several hundred thousand legit logins a day, maybe ten million that you block. Maybe a hundred million these days.
Now, you have a bucket of mobile users coming to you with attestation signals saying they’ve come from secure boot, and they are using the right credentials.
And you’ve got another bucket saying they’re Android but with no attestation, and also using the right credentials.
You know from past experience (very expensive experience) that fraud can happen from attested devices, but it’s about 10,000 times more common from rooted devices.
Do you treat the logins the same? Real customers HATE intrusive security like captchas.
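The policy this comment is gesturing at can be sketched as risk-tiered login handling. A hypothetical illustration: the tiers and the idea that attested traffic gets less friction come from the comment, while the specific policy mapping and names are invented.

```python
def login_policy(credentials_ok: bool, attestation: str) -> str:
    """Decide friction level for a login, given an attestation tier.

    attestation is one of: "strong" (secure-boot-attested device),
    "device" (weaker attestation), "none" (unattested, e.g. desktop web).
    """
    if not credentials_ok:
        return "reject"
    if attestation == "strong":
        # Fraud rate from attested devices is vastly lower (the comment
        # cites ~10,000x), so legit users see no extra friction.
        return "allow"
    if attestation == "device":
        return "allow_with_monitoring"
    # Correct credentials but no attestation: step up with an extra
    # challenge (captcha, push confirmation, etc.) instead of rejecting.
    return "step_up_auth"
```

The design choice is that attestation never replaces credential checks; it only decides how much additional friction correct-credential logins receive.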
Are you understanding the tech better now? The entire problem and solution space are different from what you think they are.
4 replies →
> That’s a “seatbelts so no good because people still die in car crashes”
Except it's not a seatbelt, it's a straitjacket with a seatbelt pattern drawn on it: it restrains the user's freedom in exchange for the illusion of security.
And like a straitjacket, it's imposed without user consent.
The difference from a straitjacket is that there's no doctor involved to determine who really needs it for security against their own weakness, and no due process to put boundaries on its use; it's applied to everyone by default.
Great. Let's just require every single computing device to be verified, signed, and attested by a government agency. Just in case it is ever misused to attack a Google online service that cannot be possibly bothered to actually spend one nanosecond thinking on security.
What could possibly go wrong. It's not only morally questionable no matter what "advantages" it provides Google, but it's also technically ridiculous because _even if every single computing device was attested_, by construction I can still trivially find ways to use them to "brute force" Google logins. The technical "advantage" of attestation immediately drops to 0 once it is actually enforced (this is where the seatbelts analogy falls apart).
Next thing I suggest after forcing remote attestation on all devices is tying these device IDs to government-issued personal ID. Let's see how that goes over. And then for the government to send the killing squad once one of these devices is used to attack Google services. That should also improve security.
Here's the dystopian future we're building, folks. Take it or leave it. After all, it statistically improves security!
You just proved the seatbelt analogy.
Yes, for SOME subset of attackers (car crashes), for SOME subset of targets (passengers), the mitigations don’t solve the problem.
This is not the anti-attestation / anti-seatbelt argument many think it is.
All security is mitigation. There is no perfection.
But it makes no sense to say that because a highly motivated attacker with a lot of money to spend can rig real attested devices to be malicious, there must be no benefit to a billion or so legit client devices being attested.
I think your enthusiasm for melodrama and snark may be clouding your judgment of the actual topic.
1 reply →
>After all, it statistically improves security!
Probably not even that, but it limits liability, and that's the only purpose. Just like the manual in your car: nobody will ever read it, but it contains a warning for every single thing that could happen.
1 reply →