Comment by nijave

4 days ago

I was always under the impression security was a red herring and the real reason was control. Google wants to own the device and rent it to users with revocable terms, the same way SaaS subscription software works. Locking down what can run is a key step in that process.

I worked at a bank on backend architecture and security, and I've posted this attestation here before: the sheer volume of fraud and fraud attempts across the whole network is astonishing. Our device fingerprinting and no-jailbreak rules weren't even close to an attempt at control. They were defense, driven by network volume and hard losses.

A significant loss of customer identity data and/or funds was considered an existential threat to our customers and our institution.

I'm not coming to Google's defense, but fraud is a big, heavy, violent force in critical infrastructure.

And our phones are a compelling attack surface for fraud and identity theft.

  • I wish we had technical solutions that offered both. For example, a kernel like seL4 could directly run sandboxed applications, like banking apps. Apps run this way could prove they are running in a sandbox.

    Then also allow the kernel to run Linux as a process, and run whatever you like there, however you want.

    It's technically possible at the device level. The hard part seems to be UX. Do you show trusted and untrusted apps alongside one another? How do you teach users the difference?

    My piano teacher was recently scammed. The attackers took all the money in her bank account. As far as I could tell, they did it by convincing her to install some Android app on her phone and then grant that app accessibility permissions. That let the app remotely control other apps. They then simply swapped over to her banking app and transferred all the money out. It's tricky, because obviously we want third-party accessibility applications. But if those permissions let applications escape their sandbox, that's trouble.

    (She contacted the bank and the police, and they managed to reverse the transactions and get her her money back. But she was a mess for a few days.)
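The "prove they are running in a sandbox" idea is essentially remote attestation. Here is a toy Python sketch of the shape of such a check, with HMAC standing in for a secure-hardware signature; every name here is invented for illustration, and this is not a real seL4 or platform API:

```python
import hashlib
import hmac
import os

# Hypothetical symmetric key shared between the device's trusted component
# and the bank; real schemes use an asymmetric key held in secure hardware.
DEVICE_KEY = os.urandom(32)

# Hash of the app/sandbox image the bank considers trustworthy.
TRUSTED_MEASUREMENT = hashlib.sha256(b"sandboxed-banking-app-v1").hexdigest()

def attest(nonce: bytes, measurement: str) -> bytes:
    """Device side: sign the server's fresh nonce plus the running image's hash."""
    return hmac.new(DEVICE_KEY, nonce + measurement.encode(), hashlib.sha256).digest()

def verify(nonce: bytes, measurement: str, signature: bytes) -> bool:
    """Server side: accept only a known-good measurement with a valid signature."""
    if measurement != TRUSTED_MEASUREMENT:
        return False  # running image is not on the allowlist
    expected = hmac.new(DEVICE_KEY, nonce + measurement.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

nonce = os.urandom(16)  # fresh per-session challenge, prevents replay
signature = attest(nonce, TRUSTED_MEASUREMENT)
assert verify(nonce, TRUSTED_MEASUREMENT, signature)
assert not verify(nonce, hashlib.sha256(b"tampered-image").hexdigest(), signature)
```

The point is only the protocol shape: a fresh challenge, a measurement of what is actually running, and a signature rooted in something the app itself cannot forge.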

    • > (She contacted the bank and the police, and they managed to reverse the transactions and get her her money back. But she was a mess for a few days.)

      And this almost certainly means that the bank took a fraud-related monetary loss, because the regulatory framework that governs banks makes it difficult for them to refuse to return their customer's money on the grounds that it was actually your piano teacher's fault for being stupid with her bank app on her smartphone (also, even if it were legal to do so, doing this regularly would create a lot of bad press for the bank). And they're unlikely to recover the losses from the actual scammers.

      Fraud losses are something that banks track internally and attempt to minimize when possible and when it doesn't trade off against other goals they have, such as maintaining regulatory compliance or costing more money than the fraud does. This means that banks - really, any regulated financial institution at all that has a smartphone app - have a financial incentive to encourage Apple and Google to build functionality into their mass-market smartphone OSs that locks them down and makes it harder for attackers to scam ordinary, unsophisticated customers in this way. They have zero incentive to lobby to make smartphone platforms more open. And there are a lot more technically unsophisticated users like your piano teacher than there are free-software enthusiasts who care about their smartphone OS provider not locking down the OS.

      I think this is a bad thing, but then I'm personally a free-software enthusiast, not a technically unsophisticated smartphone user.

      4 replies →

    • > For example, a kernel like SeL4, which could directly run sandboxed applications, like banking apps. Apps run in this way could prove they are running in a sandbox. ... Then also allow the kernel to run linux as a process, and run whatever you like there, however you want.

      This won't work. It's turtles all the way down and it will just end up back where we are now.

      More software will demand installation in the sandboxed enclave. Outside the enclave the owner of the device would be able to exert control over the software. The software makers don't want the device owners exerting control of the software (for 'security', or anti-copyright infringement, or preventing advertising avoidance). The end user is the adversary as much as the scammer, if not more.

      The problem at the root of this is the "right" some (entitled) developers / companies believe they have to control how end users run "their" software on devices that belong to the end users. If a developer wants that kind of control over the "experience", the software should run on a computer they own, simply using the end user's device as a "dumb terminal".

      Those economics aren't as good, though. They'd have to pay for all their compute / storage / bandwidth, versus just using the end user's. So much cheaper to treat other people's devices like they're your own.

      It's the same "privatize gains, socialize losses" story that's at the root of so many problems.

      5 replies →

    • The banking app has no password or 2FA prompt before opening? Even if the 2FA has to come from Authenticator/Authy/etc., that should at least have a password on it. What am I missing here?

    • The problem is it's quite easy to poke holes in a sandbox when you're outside the sandbox looking in, especially when the user is granting you special permissions they don't understand. These apps aren't doing things like manipulating the heap of the banking app, they are instead just taking advantage of useful but powerful features like screen mirroring to read what the app is rendering.

    • Yes, sandboxing is a technological protection, but once you have important data flowing we often don't have technological protections to prevent exfiltration and abuse. The global nature of the internet means that someone who publishes an app which abuses user expectations (e.g. uses accessibility to provide command and control to attackers) is often out of legal reach.

      You also have so much grey area where things aren't actually illegal, such as gathering a massive amount of information on adults in the US via third-party cookies and ubiquitous third-party javascript.

      That's why platforms created in the internet age are much more opinionated about the APIs they provide to apps, much more stringent about sandboxing, and try to push software installation onto app stores, which can restrict apps based on business policy, going beyond technological and legal limitations.

    • > As far as I could tell, they did it by convincing her to install some android app on her phone and then grant that app accessibility permissions.

      Did she make it through the non-Google-Play app install flow?

      1 reply →

  • Then don't issue an app. Issue people cards to pay with and let them come to the bank for weird transactions.

    • You can even use the chip on the card together with a cheap hardware device to authorize the transactions made with the app. This has actually existed [1] for quite some time but seems to be mostly limited to Germany. But this and other hardware token systems are in decline. Banks increasingly use apps now, increasingly without any meaningful second factor, and without even offering better options. They want this and are fully to blame.

      [1] https://en.wikipedia.org/wiki/Transaction_authentication_num... (This is a bit outdated, nowadays it works via QR codes instead of those flickering barcodes but the concept stays the same)
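The chipTAN scheme linked above is, at bottom, a challenge-response bound to the transaction details: the offline reader shows recipient and amount on its own display, the chip derives a short code from them, and the bank recomputes the same code. A toy Python sketch, with HMAC standing in for the card's real cryptogram and all values purely illustrative:

```python
import hashlib
import hmac

# Illustrative only: the real secret lives in the card's chip and the bank's HSM.
CARD_SECRET = b"secret-shared-between-card-and-bank"

def compute_tan(secret: bytes, recipient_iban: str, amount_cents: int) -> str:
    """Derive a 6-digit TAN bound to the exact transaction details."""
    msg = f"{recipient_iban}:{amount_cents}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

# User reads recipient/amount on the offline reader's display, then types the TAN.
tan = compute_tan(CARD_SECRET, "DE02120300000000202051", 4999)

# The bank recomputes the TAN over the transaction it was actually asked to
# execute; a trojan that silently swapped the recipient or amount would, with
# overwhelming probability, produce a mismatching code.
assert tan == compute_tan(CARD_SECRET, "DE02120300000000202051", 4999)
```

Because the code is bound to the recipient and amount, malware on the phone or PC cannot reuse it for a different transfer; the trust anchor is the reader's display, not the compromised device.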

    • This, 100%. I don't understand why everything needs to be an app nowadays. Some things are best done in person and without technology. No, I won't install some shitty app that requests location and network access just to order lunch. If a venue does not provide a paper menu and accept cash, they have just lost my custom.

      1 reply →

  • Yeah, I worked at a bank once. I was told that following policy, and using dependencies with known vulnerabilities so my ass was covered, was more important than actually making sure things were secure (it was someone else's problem to get that update through the layers of approval!). Needless to say, I didn't last long.

  • Do you allow customers to log in to their account with a web browser on a windows machine?

    • Web browsers are securely fingerprinted as well, on a sliding scale of access requests, from login to "initiate a wire transfer for $1M".

  • How does preventing people from running software of their choice on their own device (what you call jailbreaking) prevent fraud in practice? It's a pretty strong claim you're making there. And it's being made frequently by institutions, yet I have never seen it actually explained and backed up with any real security model.

    All the information and experience I ever got tells me this is security theater by institutions trying to distract from their atrocious security with snake oil. But I'm willing to be convinced that there is more to it if presented with evidence to the contrary. So I'm interested in your case.

    How did demanding control over your customers' devices and taking away their ability to run software of their choice reduce fraud, in quantifiable and attributable terms?

    • The app does fingerprinting and requires certain secure device profile characteristics before the app lets a user initiate certain kinds of financial transactions.

      Those are based on APIs available from the mobile devices. Google and Apple can offer other means to secure these things, and to validate that the device hasn't been cracked and isn't submitting false attestations. But even a significant financial institution has no relationship with Apple on the dev side of things: Apple does what it decides to do, and the financial institution builds to what is available.

      These controls work: over time, fraud and risk go down.
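Server-side, such controls typically reduce to a sliding scale: each transaction type requires a minimum trust level, and the device's profile maps to a score. A deliberately simplified Python sketch; the signal names are invented for illustration, and real deployments would source them from vendor APIs such as Play Integrity or App Attest:

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    attestation_passed: bool   # e.g. platform integrity verdict (illustrative)
    rooted: bool               # jailbreak/root detected
    known_device: bool         # device previously seen for this customer

# Minimum trust score per action: higher-risk actions need more signals.
REQUIRED_SCORE = {"view_balance": 1, "pay_bill": 2, "wire_transfer": 3}

def trust_score(p: DeviceProfile) -> int:
    """Map device signals to a simple additive score."""
    score = 0
    if p.attestation_passed:
        score += 1
    if not p.rooted:
        score += 1
    if p.known_device:
        score += 1
    return score

def allowed(action: str, p: DeviceProfile) -> bool:
    """Gate each transaction type on the device's current trust score."""
    return trust_score(p) >= REQUIRED_SCORE[action]

fresh_rooted = DeviceProfile(attestation_passed=False, rooted=True, known_device=False)
trusted = DeviceProfile(attestation_passed=True, rooted=False, known_device=True)

assert not allowed("wire_transfer", fresh_rooted)  # blocked from high-risk action
assert allowed("view_balance", trusted)
assert allowed("wire_transfer", trusted)
```

Production systems add many more signals (velocity, geolocation, behavioral history) and continuous rather than integer scores, but the gating structure is the same.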

      2 replies →

What would happen to a normal person's phone if Google decided to revoke their Google account? Would the phone still function? Or is it "just" a matter of creating another Google account?