Comment by divegeek

14 hours ago

It's unfriendly to developers and power users, but very friendly to the other 99.999% of users.

I used to work for Google, on Android security, and it's an ongoing philosophical debate: How much risk do you expose typical users to in the name of preserving the rights and capabilities of the tiny base of power users? Both are important but at some point the typical users have to win because there are far, far more of them.

The article implies that this move is security theater. It's not. I wasn't involved in this decision at all, but the security benefit is clear: Rate limiting.

As the article points out, Google already scans all devices for harmful apps. The problem is knowing which apps to look for. Static analysis can catch them, dynamic analysis with apps running in virtual environments can catch them, researchers can catch them, users can report them. All of these channels are used to identify bad apps, and Google Play Protect (or whatever it's called these days) can then identify them on user devices and warn the users. But if bad actors can iterate fast enough, they can get apps deployed to devices before Google catches on.

So, the intention here is to slow down that iteration. If attackers use the same developer account to produce multiple bad apps, the dev account will get shut down, requiring the attackers to create a new account, registered with a different user identity and confirmed with different government identification documents.
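To make the mechanism concrete, here is a minimal sketch of that account-level rate limiting. All the names (`DeveloperRegistry`, `flag_malicious`, and so on) are hypothetical illustrations, not any real Google API; the point is only that one confirmed bad app burns the whole verified identity, so each malware iteration costs the attacker a fresh set of ID documents.

```python
# Hypothetical sketch of identity-based rate limiting: one flagged app
# bans the whole developer account, slowing the attacker's iteration.

class DeveloperRegistry:
    def __init__(self):
        self.apps_by_dev = {}    # dev_id -> set of published app ids
        self.banned_devs = set() # identities tied to confirmed malware

    def publish(self, dev_id, app_id):
        # A banned identity cannot publish again; the attacker must
        # register a new account with different government ID documents.
        if dev_id in self.banned_devs:
            return False
        self.apps_by_dev.setdefault(dev_id, set()).add(app_id)
        return True

    def flag_malicious(self, app_id):
        # Taking down one bad app takes down its publisher's identity
        # and, with it, the attacker's ability to iterate quickly.
        for dev_id, apps in self.apps_by_dev.items():
            if app_id in apps:
                self.banned_devs.add(dev_id)
```

Under this model, publishing a second piece of malware from the same account fails, which is exactly the "slow down the iteration" effect described above.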

Note that in the short term this will just create an additional arms race. In order to iterate their malware rapidly, attackers will also need to fake government IDs rapidly. This means Google will have to get better at verifying the IDs, including, I expect, getting set up to be able to verify the IDs using government databases. Attackers will probably respond by finding countries where Google can't do that for whatever reason. Google will have to find some mitigation for that, and so on.

So it won't be a perfect solution, but in the real world, especially at Google scale, there are no perfect solutions. It's all about raising the bar, introducing additional barriers to abuse and making the attackers have to work harder and move slower, which will make the existing mechanisms more effective.

It's not even about power users. The article describes this pretty well: it's about the fact that this action will destroy, or at least severely harm, the open source app ecosystem. I can already see it having a chilling effect on developers releasing apps on F-Droid. You might ask why you should care about that when you're one of the 99% of normal users, but it all comes down to freedom: if you destroy the alternatives to the Play Store, you remove the freedom of choice that even the 99% would have if they were willing to switch to proper open source solutions.

Does anyone know if there is concrete evidence that this measure violates the EU's Digital Markets Act?

> in the name of preserving the rights and capabilities of the tiny base of power users

These are the rights of all the users. Take that perspective.

Remotely pushing code to billions of devices to lock down their basic function (running code the user loads) unless the device owner pays and provides sensitive info is a full-scale global malware attack in itself.

In that case, an ID-gated Play Store plus a developer-settings toggle behind a scary warning message would serve the same purpose for that 99.999% while leaving the rest minimally affected. Clearly that's not enough for Google.

Completely false dichotomy - you could release a separate Android channel that requires flashing through fastboot but is still signed, doesn't require an unlocked bootloader, and fully passes "Play Integrity".