
Comment by rvnx

20 days ago

If that is really the concern, then the solution they offer is the wrong one. A big, giant red warning screen (“Warning: the identity of this application's developer has not been verified, and it could be an application stealing your data”, etc.) would have worked.

What they want is to get rid of apps like YouTube Vanced that are making them lose money (and other Play Store apps)

  > What they want is to get rid of apps like YouTube Vanced

I think it is also very telling where they're rolling this out first: Brazil, Indonesia, Thailand, and Singapore.

It felt weird that the official press release was quoting entities from these countries, as if it should give confidence to the rest of the world. I can't imagine what these countries would want with apps that can be traced back to a government id...

Vanced and the like are more of a First World/Western issue. I don't think you're wrong, but I have a strong gut feeling there are other pressures at work. Something just doesn't smell right...

  • Hm, not sure about that. I know from browser add-ons that markets like Brazil do suffer from increased scams, especially banking scams. I could see that this is also an issue for scam apps.

    Firefox for instance does not allow you to install unsigned extensions. You don't need to list them on their storefront, but they want to perform automated tests and have the ability to block extensions through this signing requirement.

    So in principle I can see them wanting to address a legitimate issue, but the way they are going about this is way too centralized. IMO they should do something like we have for web certificates, where vendors can add more root authorities than just the one from Google, and users should be able to add their own root certificates if they want to sideload apps.
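
    To make that concrete, here is a rough, purely hypothetical sketch of what checking an app's signing chain against an extendable set of trusted roots might look like, using the standard Java PKIX APIs from Kotlin. The function and parameter names are made up for illustration, and this is not how Android's package verifier actually works today.

        import java.security.cert.CertPathValidator
        import java.security.cert.CertificateFactory
        import java.security.cert.PKIXParameters
        import java.security.cert.TrustAnchor
        import java.security.cert.X509Certificate

        // Hypothetical sketch only: validate an app's signing certificate chain
        // against trust anchors that include both the platform's roots and roots
        // the user explicitly chose to add, similar to how TLS root stores work.
        fun isTrustedPublisher(
            signingChain: List<X509Certificate>,   // leaf certificate first, as read from the app's signature
            platformRoots: Set<X509Certificate>,   // roots shipped by the OS vendor
            userAddedRoots: Set<X509Certificate>   // roots the user opted in to for sideloading
        ): Boolean {
            val anchors = (platformRoots + userAddedRoots).map { TrustAnchor(it, null) }.toSet()
            if (anchors.isEmpty() || signingChain.isEmpty()) return false

            val certPath = CertificateFactory.getInstance("X.509").generateCertPath(signingChain)
            val params = PKIXParameters(anchors).apply {
                isRevocationEnabled = false        // revocation checking omitted to keep the sketch short
            }

            return try {
                CertPathValidator.getInstance("PKIX").validate(certPath, params)
                true                               // chain terminates at a trusted root
            } catch (e: Exception) {
                false                              // unknown or invalid publisher
            }
        }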

    •   > I could see that this is also an issue for scam apps.
      

      I don't deny that it can be used to reduce scams, but I think there are far better ways to solve this that don't give authoritarian countries extra powers. Thing is, signing doesn't actually address the problem. It is a way to track the problem, not prevent the problem. Don't confuse the two.

        > Firefox for instance does not allow you to install unsigned extensions.
      

      That's absolutely not true[0]. You need to sign the extension to publish it to their store, but you don't need signing just to install it. By the way, the Play Store already does this too, which I'm totally okay with!

      [0] https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

        For other people to use your extension, you need ***to package it and submit it to Mozilla*** for signing.

      2 replies →

  • >Vanced and such is more of a First World/Western issue

    What? I'm from Brazil, and Vanced is just as big here, if not bigger. In fact, most of my 'first world' friends just pay for YouTube Premium (or whatever it is called); these kinds of workarounds are mostly used in countries with less purchasing power.

    • I'm talking about a different kind of problem. Ask the next question (and maybe a few more) about why this is the situation.

In addition to the other perspectives already offered here, warning screens such as the one you propose were already shown for sideloaded apps, and these screens worked against Google in their lawsuit with Epic Games. So that's another contributing factor for the policy we're discussing.

It won't work because of too many false positives. People are already trained to ignore warnings, just as they blindly accept T&Cs without reading them.

  • If a giant red warning saying 'THIS APP MAY BE MALWARE' doesn't stop someone, then they've either made an informed choice to proceed or it's willful negligence. In other words, users aren't 'trained' to ignore warnings; they're simply being willfully negligent.

    • It’s because on the other side of that warning is a cracked version of Spotify that removes the adverts.

      The user can’t make an informed choice because it’s literally impossible to audit the safety of the app or its author. So they will click past any warnings and follow any number of steps to install the app that gives them something desirable for free.

      1 reply →

    As someone who is usually careful, I too have found myself clicking past warnings and error notifications in recent times, mainly because I want to do something and the software is actively preventing me from doing it. It isn't negligence; it's just wanting to get something done and not having the time or the nerves to carefully read through and think about every message, dialog, and screen.

      Back in the early days of the Internet there was the Joel Spolsky article on why users will always do anything to see the dancing bunnies.

    It doesn’t matter what adjectives you apply to them: they do it, and they’ll do it again. Most people are not equipped to evaluate whether that statement is true, and if a few good apps don’t register with Google (that such apps will exist is the whole reason this move is problematic at all, right?) and instead ask you to click through a warning from their website or wherever, people will get used to touching the stove and not getting burned.

      Cf. the Windows “it could be malware” blurb. You basically can’t use any software from a small publisher without clicking through it, even if they pay for a code signing certificate.

    But then you get situations like “THIS PRODUCT MAY CAUSE CANCER” being cautioned everywhere, with no distinction between “this is certainly harmful” and “we just haven't verified that it isn't harmful”.

    • Have you met a human before? Most will simply click past anything that’s impeding their immediate goal.

    • The fact that you don't even realise why that wouldn't work is kind of telling.

      > users aren't being 'trained' to ignore warnings

      Of course they are. Every time they click "continue anyway" and it actually isn't malware (which is 99% of the time) they are being trained that the warning is nonsense.

      And they're right! What use is a warning that an app might be malware, if a) it actually isn't almost every time you see the warning, and b) you have no way of telling if it is or isn't anyway?

      I hate this move too and I don't think they should have done it, but "just make the warning even bigger!" is obviously dumb.

  • There aren't too many false positives; it's just that most modern Android software is malware.

    Saying "this will steal your data" is probably correct.

    So what we're actually asking users to do is install some malware (if it's provided by a big enough tech company) but not other malware. Of course users get confused.

    Just stop downloading apps altogether and run the web views in the original web view: the web browser.

    Will Google, Meta et al. do that and abandon their apps? Of course not, they need to install malware.

  • The way we allow paternalistic tech companies to train the consumer to abdicate personal responsibility is going to bite us in the ass sooner or later. I'm betting on sooner.

  • Then make the false-positive rate lower. The problem is that they aren't incentivized to improve such features, because where's the money in that?

  • How about requiring the user to type "App Foo might be malware. I want to install it anyway." into a text box, with copy and paste disabled for that box?
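
    A rough sketch of what that could look like on Android (illustrative only; the names are made up, and the paste-suppression details are just one plausible approach using standard framework APIs):

        import android.app.AlertDialog
        import android.content.Context
        import android.content.DialogInterface
        import android.text.Editable
        import android.text.TextWatcher
        import android.view.ActionMode
        import android.view.Menu
        import android.view.MenuItem
        import android.widget.EditText

        // Illustrative sketch of a "type the risk statement to proceed" dialog.
        // The install button stays disabled until the exact phrase is typed, and
        // the selection/insertion popups are suppressed so it can't just be pasted.
        fun showTypedConfirmation(context: Context, appName: String, onConfirmed: () -> Unit) {
            val phrase = "App $appName might be malware. I want to install it anyway."

            val noActionMode = object : ActionMode.Callback {          // blocks cut/copy/paste menus
                override fun onCreateActionMode(mode: ActionMode?, menu: Menu?) = false
                override fun onPrepareActionMode(mode: ActionMode?, menu: Menu?) = false
                override fun onActionItemClicked(mode: ActionMode?, item: MenuItem?) = false
                override fun onDestroyActionMode(mode: ActionMode?) {}
            }

            val input = EditText(context).apply {
                isLongClickable = false                                 // no long-press paste popup
                setTextIsSelectable(false)
                customSelectionActionModeCallback = noActionMode
                customInsertionActionModeCallback = noActionMode        // API 23+
            }

            val dialog = AlertDialog.Builder(context)
                .setTitle("Install anyway?")
                .setMessage("Type exactly:\n\n$phrase")
                .setView(input)
                .setPositiveButton("Install") { _, _ -> onConfirmed() }
                .setNegativeButton("Cancel", null)
                .create()

            dialog.setOnShowListener {
                val install = dialog.getButton(DialogInterface.BUTTON_POSITIVE)
                install.isEnabled = false
                input.addTextChangedListener(object : TextWatcher {
                    override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {}
                    override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {}
                    override fun afterTextChanged(s: Editable?) {
                        install.isEnabled = s?.toString() == phrase     // exact match required
                    }
                })
            }

            dialog.show()
        }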

  • Maybe they shouldn't offer an "OK" button that the stupid user can blindly click. They could tell you "this app is dangerous, go to system settings to enable it" and offer only a "Dismiss" button.

    • I'll point to Windows Vista, which went all in on this kind of security, even giving you a big warning if you tried to change your background. The computer magazines quickly published guides on how to change a slider or registry setting to reduce the number of stupid warnings, and people were quickly trained to ignore these screens and just hit OK.

      Anyway, Apple already does this with unknown apps downloaded from the internet: you need to go to security settings and hit a button there.

  • This is something laughable that Apple does. Any time you install something from GitHub, it'll make you click through a few extra boxes. And their tightening things down also ends up pushing people to look for third-party software in the first place. All this really does is, like you said, teach people to ignore warnings.

    • That's just their first step. They will remove the extra boxes eventually. They already removed option-click as a workaround.

"Displaying an angry warning message" is one of the tools we've used for decades, and never with much success.

  • So what's wrong with that? You get warned, you ignore the warning and get hacked, that's on you for being dumb enough to download stuff from some shady website. Plus, Android is supposed to have decent isolation and permission controls, unlike desktop OSs like Windows or Linux (not counting Snap/Flatpak) where software can read your entire disk or any arbitrary file and send it via the internet.

    Plus, you are not required to do that: you can just stick to Google Play and trust what Google approves there. But there's no need to lock everyone else down because of your recklessness.

    • Exactly this. I want a big toggle that I can turn on in developer settings (perhaps make it more involved than that, but you get the gist) that says "I acknowledge that from here on in I am responsible for my data and I hereby absolve Google and other interested parties from responsibility should I blah blah blah..."

      Why the hell can't I use my rooted device for payments? It's my goddamn money at risk.

      1 reply →

    • Is the point of the warning to avoid liability or to actually inform users? If you tell people everything causes cancer (instead of warning only about things you've verified to be harmful), soon enough they're going to stop caring when you say things like "don't eat asbestos, that causes cancer". I think a "checkmark" system makes more sense: for verified accounts/developers, put a checkmark near their name, and for unverified ones, show nothing. There's no reason to raise an alarm when 99% of the time the alarm is unfounded.

  • You just have a flawed definition of success.

    By allowing people to shoot themselves in the foot after ignoring an unmistakable warning, you are helping teach the foolish to be more careful in the future. Making mistakes is the best way to learn something.

    • People who just ignore big banners will tell you that "they have been hacked", as if getting hacked were a weather phenomenon. They won't even connect getting hacked with the big red banner.

      If they even notice, that is. It's just as likely that they act as an open relay for a year before moving to a new phone because their battery keeps dying so fast for some unknown reason.

      1 reply →

  • Fuck em. If you ignore a warning, let nature take its course. We don't need to child-proof everyone's home.

I've often lamented at work that we lose freedom under the guise of "security".

Security and intellectual property (IP) protection can both be true motivations. Google now has a big enough reason to make it happen.

In a perverse way, it's not that protecting Google's IP is meant to make us safer. Yet, strangely, it does.

There will always be tangential business aims that are designed to be satisfied at the same time as the consumer benefit.

To be fair, though, this strategic duplicity is a technique Apple has used since Jobs, so it's not as if Google came up with the approach first.

It's such a simple and effective solution that it could be implemented overnight and 'help to cut down on bad actors who hide their identity to distribute malware, commit financial fraud, or steal users' personal data' by tomorrow. Mission accomplished, internet saved, and everyone's happy, just like in a fairy tale from the early 2000s.

That was never the real reason. Security and "think of the children" to take away rights are the two oldest plays in the playbook.

Do you like losing money?

  • > Do you like losing money?

    what about us losing control over our own devices? do you like losing control over devices you paid for?

    • People have no "control" over their own device if they have malware on it. The weirdo incoherent tech-chauvinism of "control" and "freedom" evidenced all over this thread is one of the most obnoxious trends on HN.

      10 replies →

You can just use the browser and uBlock to browse YouTube.

  • Let's see how long this remains true. With every step they get closer to making you watch what they want instead of what you want, it becomes more likely they'll even try to prevent you from viewing videos when you use uBlock Origin.