Comment by madeforhnyo
9 days ago
A colleague of mine was tech lead at a large online bank. For the mobile app, the first and foremost threat that security auditors would find was "The app runs on a rooted phone!!!". Security theater at its finest; checkboxes gotta be checked. The irony is that the devs were using rooted phones for QA and debugging.
Meanwhile, it's probably A-OK for the app to run on a phone that hasn't received security updates for 5 years.
I don't get it. If they're worried about liability, why not check the security patch level and refuse to run on phones that aren't up to date?
I'm guessing it's because there are a lot of phones floating around that aren't updated (probably far more than are rooted), and they're willing to pretend to be secure when it impacts a small number of users but not willing to pretend to be secure when it impacts many users.
Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years. For example, a malicious OS maker could add their own certificate to the root store, essentially allowing them to MitM all the traffic you send to the bank.
Liability works on the principle that "if it's good enough for Google, it's good enough for me." A bank cannot realistically vet every vendor, so they rely on the OS maker to do the heavy lifting.
Even if they wanted to trust a third-party OS, they would need to review them on a case-by-case basis. A hobbyist OS compiled by a random volunteer would almost certainly be rejected.
I can add certificates on my unrooted Android. That's how HTTP Toolkit [0] works; it only requires adb, which (thankfully) doesn't trip banking apps. Banking apps can (and do, iirc) pin certificates, so a rooted phone adds no risk whatsoever.
Also, in my experience a rooted phone is far more secure than OEM Android. Security is supposed to assess risk objectively, yet "running on a Xiaomi phone with third-party apps that cannot be uninstalled and have system access" is somehow more secure than "running on a signed LineageOS where the user can edit the hosts file".
[0] https://httptoolkit.com/
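The pinning mentioned above boils down to comparing a hash of the server's public key against a value baked into the app at build time. A minimal sketch of the idea in Python (the function name and the hard-coded pin are invented for illustration; real apps pin the SPKI hash of their actual certificate chain):

```python
import base64
import hashlib

# Hypothetical pin baked into the app: the SHA-256 of the server's
# SubjectPublicKeyInfo (SPKI), base64-encoded. Illustrative value only.
EXPECTED_PIN = base64.b64encode(
    hashlib.sha256(b"example-spki-bytes").digest()
).decode()

def connection_allowed(spki_bytes: bytes) -> bool:
    """Accept the TLS connection only if the presented key matches the pin.

    A MitM proxy that adds its own root CA to the device trust store still
    cannot present a key matching this pin, which is why pinning defeats the
    added-root-certificate attack regardless of whether the phone is rooted.
    """
    presented = base64.b64encode(hashlib.sha256(spki_bytes).digest()).decode()
    return presented == EXPECTED_PIN

# The bank's real key passes; an interception proxy's key does not.
print(connection_allowed(b"example-spki-bytes"))  # True
print(connection_allowed(b"mitm-proxy-spki"))     # False
```

This is the same idea behind OkHttp's `CertificatePinner` and Android's network security config, just stripped down to the comparison itself.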
>Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years.
That's just straight-up false; the phone without security updates has known exploits its user knows nothing about (and certainly not how to avoid them). The phone with an unknown OS has a user capable of installing said OS, at the very least.
> Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years.
I'm not convinced this is generally true, at least as can be detected by an app. Back when I had my phone rooted, it was configured so that it would pass all the Google checks and look like the stock OS. That configuration was probably dangerous, but apps were happy with it. Now that I run an OS that doesn't lie about what it is, I'm flagged as untrustworthy. What's the point in being honest?
Overall, I don't think they really have any idea what's a threat based on the checks they're doing, so I don't think they can say at all what's more or less trustworthy. But a phone that reports being years out of date should reasonably not be expected to be secure, and yet they mark it as secure anyway. Many of those devices can be rooted in a way that still passes their checks. I would think, if nothing else, that would be reason to block them, since they're interested in blocking rooted devices.
> If they're worried about liability, why not check the security patch level and refuse to run on phones that aren't up to date?
Google doesn't provide an API or data set to figure out what the current security patch level is for any particular device. Officially, OEMs can now be 4 months out-of-date, and user updates lag behind that.
Your guess is good, but misses the point. Banks are worried about a couple things with mobile clients: credential stealing and application spoofing. As a consequence, the banks want to ensure that the thing connecting to their client API is an unmodified first-party application. The only way to accomplish this with any sort of confidence is to use hardware attestation, which requires a secure chain-of-trust from the hardware TEE/TPM, to the bootloader, to the system OS, and finally to your application.
So you need a way for security people working for banks to feel confident that it's the bank's code which is operating on the user's behalf to do things like transfer money. They care less about exploits for unsupported devices, and it's inconvenient to users if they can't make payments from their five-year-old device.
And this is why Web Environment Integrity and friends should never be allowed to exist: Android is the perfect cautionary tale of what banks will do with trusted-computing features, namely the laziest possible thing that technically works and keeps their support phone lines open.
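The chain of trust described above (hardware TEE, then bootloader, then OS, then app) can be sketched as a toy measured-boot scheme: each stage's measurement is folded into a running digest, and the hardware key signs the final value. Everything here (key, stage names, the use of an HMAC in place of a real signature) is invented to keep the sketch self-contained; real Android uses the Key Attestation / Play Integrity machinery:

```python
import hashlib
import hmac

# Hypothetical device-unique key held in the TEE; the bank's server can
# verify its output via the vendor's attestation infrastructure.
TEE_KEY = b"device-unique-key-held-in-hardware"

def measure_chain(stages: list[bytes]) -> bytes:
    """Fold each boot stage into a running hash, PCR-style:
    digest = H(previous_digest || H(stage)). Modifying any stage changes
    every digest after it, so the final value commits to the whole chain."""
    digest = b"\x00" * 32
    for stage in stages:
        digest = hashlib.sha256(digest + hashlib.sha256(stage).digest()).digest()
    return digest

def attest(stages: list[bytes]) -> bytes:
    # The TEE signs (here: MACs, for a runnable toy) the final measurement.
    return hmac.new(TEE_KEY, measure_chain(stages), hashlib.sha256).digest()

stock = [b"bootloader-v42", b"stock-os-image", b"bank-app-v7"]
rooted = [b"unlocked-bootloader", b"custom-os-image", b"bank-app-v7"]

expected = attest(stock)  # what the bank expects from a known-good device
print(hmac.compare_digest(attest(stock), expected))   # True
print(hmac.compare_digest(attest(rooted), expected))  # False
```

Note that the untouched `bank-app-v7` stage at the end still fails attestation on the rooted chain, which is exactly why "same app, different OS" gets rejected.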
All good points. Thanks for that!
I'm not an Android developer, but I was thinking they could use something like the android.os.Build.VERSION.SECURITY_PATCH call to get the security patch level. Maybe that's not sufficient for that purpose, though.
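That field really is a plain `YYYY-MM-DD` string, so the check could be as simple as parsing it and comparing against a cutoff. A rough sketch of the logic (the 180-day threshold is an arbitrary choice for illustration, not any bank's actual policy):

```python
from datetime import date, timedelta

# Arbitrary staleness threshold, purely for illustration.
MAX_PATCH_AGE = timedelta(days=180)

def patch_level_acceptable(security_patch: str, today: date) -> bool:
    """Return True if the reported patch level is recent enough.

    `security_patch` has the format of android.os.Build.VERSION.SECURITY_PATCH,
    e.g. '2021-08-05'.
    """
    patch_date = date.fromisoformat(security_patch)
    return today - patch_date <= MAX_PATCH_AGE

print(patch_level_acceptable("2021-08-05", date(2021, 10, 1)))  # True
print(patch_level_acceptable("2016-03-01", date(2021, 10, 1)))  # False
```

Of course, as pointed out elsewhere in the thread, this value is self-reported by the OS, so a rooted or custom ROM can claim whatever it likes — which is exactly why banks reach for hardware attestation instead of trusting it.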
1 reply →
There's definitely some way of telling; enterprises can block sign-in from devices with no recent updates in Microsoft Authenticator or whatever app they use.
It's more frustrating because my partner's Pixel 4a cannot use Google Pay or the bank apps because it is an "invalid OS" (I am guessing due to lack of updates?). So: perfectly fine hardware, but crippled in functionality due to the lack of software updates.
My partner has a 4a and no such issues. Are you talking about stock Android or something else?
1 reply →
I've seen: "but iOS can be jailbroken and it doesn't have an AV!" while the MDM did not allow jailbroken devices, and they also allowed sudo on Linux.
Auditors are clueless parasites as far as I'm concerned. The whole thing is always a charade where the compliance team, who barely know any better, try to lie to the auditor, and the auditor picks random items they don't understand anyway. A waste of time, money, and humans.
At best it's "cover your ass" security, so when you do get pwned you can say you went through an "accrediting auditor", blah blah blah.
Agreed on everything you said. Just wish there was a more efficient way to do things :/
Yep, some stakeholder wants a pen-test or an audit so you do it and address the findings to keep them happy. Going through it now at work - bunch of silly findings because the pen testers know they don't get paid to send back an empty report and tell you everything is fine.
As long as copying some numbers, printed on a piece of plastic, into an online order form is all the authentication that is needed for a transaction, anything more than that is inherently security theater.
That’s why for most transactions I do with a credit card in my country, you need an extra validation with the mobile app. It is mostly American websites that do not enable this functionality.
Yes, because we don't want these stupid locked down apps. Credit cards give buyers many protections, it's very easy to dispute an illegitimate transaction.
4 replies →
Because we have anti-fraud consumer protection rules, and CCs operate on a make-money-first kind of basis. The debit networks, on the other hand, are a different story.
Yeah, that's the first thing a pentest will complain about; had the same problem too. I pushed back enough that it's trivial to bypass, but the bank and pentesters also agreed with me that it's security theater, or else I would never have had the chance.
I always ask them if they have root/admin on their computer. Then follow up playing dumb with "shouldn't we lock out PCs too?". Watching them stammer is worth the 30 second aside.
> Then follow up playing dumb with "shouldn't we lock out PCs too?".
Unfortunately, some banks do, for various functionality; there are many things you can do via bank apps and not typically via their website.
Locking down PCs is easy: just set a random password.
1 reply →
Who do we lobby to get this removed from the auditors checklists? This is a solvable problem but it’s political. And if we don’t solve it personal computing is at risk.
Start by calling (or visiting the area office of) your senator and congressman. If you are reasonably articulate, they engage and listen. Doesn't matter if the listener is not a techie; they will ask questions around policy and why it affects constituents.
This is 1000x more useful than online petitions or other passive stuff. Politicians know that one person to have taken the effort to do this, means 1000 others are feeling the same thing but are quiet.
From my experience with the fed level senator.. they're already lobbied to shit. For example, explaining to Duckworth that fed level id tying to your internet travel and encryption backdoors aren't safe.. they'll send you copy that she really wants you to know she's thinking about the children while rolling around in her wheelchair.
This is nothing to do with politicians.
A lot of that is security theater at its best. However given the forced attack surface I would imagine that there is a hard push from authoritarians and the finance world to make a "secure chain" from service to screen.
My guess: they're afraid that scammers are going to mirror the screen and remote-control the app. (More orgs are moving to app/phone-based assumptions because it saves the org money and pushes cost onto the consumer.) Instead of providing protections against account takeover, we're going to get devices we don't own that we have to pay for, maintain, and pay for services on, just to get a terminal to our own bank account. Additionally, there are many dictatorships, like the UK, North Korea, etc., that are very adamant that you don't look at things without their permission. So they're trying to close the gap of age-verification bypasses via VPNs.
> the first and foremost threat that security auditors would find was "The app runs on a rooted phone!!!".
GrapheneOS is not rooted, or is not required to be.
What's more, the project advises against rooting your phone and tells you that if you install GrapheneOS and root it, you aren't running GrapheneOS anymore.
No it's not, but it's bundled in the same basket. "Didn't pass DEVICE_INTEGRITY -> rooted"
Yep that's my experience as well, if you don't get the play protect™ absolution your device is seen as rooted. Latest app to display this BS behavior was PagerDuty, I guess they have to protect their secret sauce of calling an API and showing notifications
1 reply →
But grapheneos doesn't need to be rooted!
Unfortunately, root detection is greatly flawed, most of the time.
Oh how I fucking wish "security" wasn't a stupid cargo cult checkbox list 3/4 of the times.
Unfortunately, the rot runs too deep.
Your password must be between 8 and 12 characters, and must have lowercase, uppercase, numbers, and punctuation.
Pick up the can!
My favorite is when it must have punctuation, but certain punctuation is silently banned, so I have to keep refreshing my password generator until it gives me an acceptable combination.
14 replies →
Having more than just alphanumeric characters widens the domain of the password hash function, and this directly increases the difficulty of brute-force cracking. But having such a small maximum password length is... puzzling, to say the least. I would accept passwords of up to 1 KiB in length.
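To put rough numbers on that: widening the alphabet grows the search space per character, but capping the length at 12 caps the exponent, which matters far more. A quick back-of-the-envelope in Python (the 95 is the printable ASCII set; the 20-character lowercase passphrase is a made-up comparison point):

```python
def search_space(alphabet_size: int, max_length: int) -> int:
    """Total number of candidate passwords of length 1..max_length."""
    return sum(alphabet_size ** n for n in range(1, max_length + 1))

# Full printable ASCII (95 symbols) but capped at 12 characters, as in
# the policy above...
capped = search_space(95, 12)
# ...versus lowercase-only (26 symbols) allowed to reach 20 characters.
long_simple = search_space(26, 20)

print(long_simple > capped)  # True: extra length beats a bigger alphabet
```

So a policy that forces punctuation while forbidding length is optimizing the smaller factor.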
With rainbow tables and leaked-password lists, even an 11-character password like 'password123' can be trivially cracked, and as the steady stream of password leaks shows, not everyone is great at managing secrets and credentials.
21 replies →
Haha, having such a low cap on max chars just makes it that much easier to brute force, doesn't it?
On password length, I once had an account on Aetna that let me put whatever I want for my password, so I used a three-word passphrase that Bitwarden generated for me. It ended up being like 20 chars.
Then I tried to log in with that password. Whoopsies, the password input only allowed a max of 16 chars!
Ended up using a much less secure password because of this.
1 reply →
> Pick up the can!
Gotta admit, this triggered me. I don’t think those are the same thing. If no one had a good password we wouldn’t affect each other negatively. If no one picked up trash, we would.
Edit: Sorry folks, didn’t get the reference.
3 replies →