Comment by hnuser123456

6 months ago

And I feel like it undermines any effort to make free, featureful applications if the hardware itself can't be trusted.

You can trust hardware and software that's easy to inspect.

If you can't be sure what's going on, and are unable to inspect or debug the hardware and software, how can you trust that it's doing what you want?

Proprietary hardware and software are already known to work against the interests of the user. That opacity - not knowing exactly what's going on - is being exploited at large scale.

Let's put it this way: suppose you can choose between making your own lasagna from a good recipe and a ready-made microwave lasagna. Which would you choose? How about your suit? And would you trust an open pacemaker known to work well, or the latest Motorola or Samsung pacemaker? Would you rather verify the device independently, or pay up for an SLA?

  • No software is "easy to inspect". Only a tiny fraction of users will ever even try. When things are inspected and problems are found, you need a way to revoke the malicious bits. You'll never notify everyone, which is one of the roles app stores play.

    You trust hardware and software by establishing boundaries. We figured this out long ago with the kernel mode/user mode privilege check and other things. You want apps to be heavily locked down/sandboxed, and you want the OS to enforce it, but every time you do you go up against the principles of open source absolutists like the FSF. "What do you mean my app can't dig into the storage layer and read the raw image files? So what if apps could use that to leak user location data, I need that ability so I can tell if it's a picture of a bird"

    For sensitive information - such as financial transactions - the rewards for bad actors are simply too high to trust any device which has been rooted. The banks - who are generally on the hook if something goes wrong, or at least have to pay a lot of lawyers to get off the hook - are not interested in moral arguments, they want a risk-reduced environment or no app for you - as is their right.

    • > For sensitive information - such as financial transactions - the rewards for bad actors are simply too high to trust any device which has been rooted

      In practice, that just means you trust a Chinese black-box Android ROM from a random manufacturer, but not a fresh LineageOS build. To run some banking apps on the latter, one has to root the phone and install all kinds of workarounds to hide the fact that it's running an OS you can actually trust.

      I don't think that's right. I don't think non-manufacturer ROMs, or rooted phones, are a real danger in practice; this is all just security theater and an excuse to control what people do on their own devices.

    • > The banks - who are generally on the hook if something goes wrong, or at least have to pay a lot of lawyers to get off the hook - are not interested in moral arguments, they want a risk-reduced environment or no app for you - as is their right.

      If they pay for the phone and ship it to you then I agree. Otherwise, they have an obligation to serve their community (part of their banking charter) and that may include meeting their customers where they are, rather than offering an app with unreasonable usage requirements.


    • > You trust hardware and software by establishing boundaries. We figured this out long ago with the kernel mode/user mode privilege check and other things. You want apps to be heavily locked down/sandboxed, and you want the OS to enforce it, but every time you do you go up against the principles of open source absolutists like the FSF. "What do you mean my app can't dig into the storage layer and read the raw image files? So what if apps could use that to leak user location data, I need that ability so I can tell if it's a picture of a bird"

      Well, no. The objection isn't to sandboxing apps, but to sandboxing the user, as it were. On my laptop, I run my browser in a sandbox (e.g. bubblewrap, though the implementation of choice shifts with time), but as the user I control that sandbox. Likewise, on my phone, I'm still quite happy that my apps have to ask for assorted permissions; it's just that I should be able to grant permission to read my photos if I choose.
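      To illustrate the "user controls the sandbox" point, here is a sketch of what such a bubblewrap invocation might look like. The specific bind mounts, and `firefox` as the sandboxed program, are illustrative assumptions; real setups vary by distribution.

      ```shell
      # Launch a browser in a bubblewrap sandbox. It sees a read-only
      # system, a private /tmp, and only the directories the user has
      # explicitly chosen to bind in -- the user, not the app vendor,
      # decides what is visible.
      bwrap \
        --ro-bind /usr /usr \
        --symlink usr/lib /lib \
        --symlink usr/bin /bin \
        --proc /proc \
        --dev /dev \
        --tmpfs /tmp \
        --ro-bind "$HOME/Downloads" "$HOME/Downloads" \
        --unshare-all \
        --share-net \
        firefox
      ```

      The key design point is that every `--ro-bind` line is a decision the user makes and can change; nothing outside those binds exists from the browser's perspective.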


    • Not really.

      If their security depends on enslaving the user, their security sucks.

      Real security, whether for your financial transactions or your bird pictures, doesn't depend on keeping the algorithm secret. It's secure even when the design is public.


  • There’s no way I’d trust open source with my health. And I’m not sure there is a single open project known to work well, let alone a pacemaker, that could plausibly be funded in the open source world. What open source hardware is actually more usable than the closed source alternative for most people?

Should the app builder’s ability to “trust” that the hardware will protect them from the user supersede the user’s ability to trust that the hardware will protect them from the app?

In other words, should the device be responsible for enforcing DRM (and more) against its owner?