Comment by bfirsh
4 months ago
Whenever I read about it, I am surprised at the complexity of iOS security: the hardware level, the kernel level, all the various types of sandboxing.
Is this duct tape over historical architectural decisions that assumed trust? Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?
>Is this duct tape over historical architectural decisions that assumed trust?
Yes, it's all making up for flaws in the original Unix security model and the hardware design that C-based system programming encourages.
> Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?
Yes, capability architecture, and yes, they exist, but only as academic/hobby exercises so far as I've seen. The big problem is that POSIX requires the Unix model, so if you want to have a fundamentally different model, you lose a lot of software immediately without a POSIX compatibility shim layer -- within which you would still have said problems. It's not that it can't be done, it's just really hard for everyone to walk away from pretty much every existing Unix program.
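To make the contrast concrete, here's a minimal sketch using FreeBSD's Capsicum (a real, if niche, retrofit of capabilities onto POSIX rather than a from-scratch capability OS): once a process enters capability mode it loses all ambient authority, and the only things it can touch are the descriptors, with the rights, it already holds.

    /* Minimal sketch of capability-style access via FreeBSD Capsicum.
     * Assumes FreeBSD; error handling is abbreviated for brevity. */
    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* Acquire every resource we will need *before* sandboxing. */
        int fd = open("data.txt", O_RDONLY);
        if (fd < 0) return 1;

        /* Limit this descriptor to read-only capability rights. */
        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ);
        if (cap_rights_limit(fd, &rights) < 0) return 1;

        /* Enter capability mode: global namespaces disappear. A later
         * open("/etc/passwd", O_RDONLY) fails with ECAPMODE; only the
         * descriptors (capabilities) we already hold keep working. */
        if (cap_enter() < 0) return 1;

        char buf[128];
        (void)read(fd, buf, sizeof buf);  /* still allowed: covered by CAP_READ */
        return 0;
    }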
> seL4 is a fast, secure and formally verified microkernel with fine-grained access control and support for virtual machines.
https://medium.com/@tunacici7/sel4-microkernel-architecture-...
It's missing "the rest of the owl", so to speak, so it's a bit of a stretch to call it an operating system for anything more than research.
Vulnerabilities are inevitable, especially if you want to support broad use cases on a platform. Defense-in-depth is how you respond to this.
iOS is based on macOS, which is based on NeXTSTEP, which is a Unix.
It’s been designed with lower user trust since day one, unlike other OSes of the era (consumer Windows, Mac’s classic OS).
Just how much you can trust the user has changed over time. And of course the device has picked up a lot of capabilities and new threats, such as always-on networking in various forms and the fun of a post-Spectre world.
why not do both :)
I think there's also inherent trust in "hardware security", but as we all know it's all just hardcoded software at the end of the day, and complexity brings bugs more frequently.
Yes, but they're architectural decisions made at Bell Labs in the 70s. iOS was always designed with the assumption that no one is trustworthy[0], not even the owner of the device. So there is a huge mismatch between "70s timesharing OS" and "phone that doesn't believe you when you say 'please run this code'". That being said, most of these security features are not duct tape over UNIXisms that don't fit Apple's walled garden nonsense. To be clear, iOS has the duct tape, too, but all that lives in XNU (the normal OS kernel).
SPTM exists to fix a more fundamental problem with OS security: who watches the watchers? Regular processes have their memory accesses constrained by the kernel, but what keeps the kernel from unconstraining itself? The answer is to take the part of the kernel responsible for memory management out of the kernel and put it in some other, higher layer of privilege.
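A rough sketch of that split, as a toy model with made-up names (this is not Apple's actual SPTM interface): the kernel never writes page tables itself; it asks a higher-privileged monitor, which enforces invariants like "no writable-and-executable mappings" and "the kernel can't remap frames it doesn't own".

    /* Toy model of the "who watches the watchers" split: the kernel asks a
     * higher-privilege monitor to update page tables instead of writing them
     * itself. All names are hypothetical; this is not Apple's SPTM API. */
    #include <stdint.h>
    #include <stdio.h>

    enum perm  { PERM_R = 1, PERM_W = 2, PERM_X = 4 };
    enum owner { OWNER_KERNEL, OWNER_MONITOR };

    #define NFRAMES 16
    static enum owner frame_owner[NFRAMES];   /* tracked by the monitor  */
    static uint32_t   fake_pte[NFRAMES];      /* stand-in for page tables */

    /* Conceptually runs at a higher privilege level than the kernel; the
     * real transition would be a trap-like entry, not a C function call. */
    int monitor_map_frame(unsigned frame, unsigned perms) {
        if (frame >= NFRAMES)                     return -1;
        if ((perms & PERM_W) && (perms & PERM_X)) return -1; /* no W+X, ever      */
        if (frame_owner[frame] != OWNER_KERNEL)   return -1; /* not yours to remap */
        fake_pte[frame] = perms;                  /* only the monitor writes PTEs  */
        return 0;
    }

    int main(void) {
        frame_owner[3] = OWNER_MONITOR;          /* a monitor-owned frame */
        printf("%d\n", monitor_map_frame(1, PERM_R | PERM_W)); /*  0: allowed      */
        printf("%d\n", monitor_map_frame(1, PERM_W | PERM_X)); /* -1: W+X denied   */
        printf("%d\n", monitor_map_frame(3, PERM_R));          /* -1: wrong owner  */
        return 0;
    }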
SPRR and GLs are hardware features that exist solely to support SPTM. If you didn't have those, you'd probably need to use ARM EL2 (hypervisor) or EL3 (TrustZone secure monitor / firmware), and also put code signing in the same privilege ring as memory access. You might recognize that as the design of the Xbox 360 hypervisor, which used PowerPC's virtualization capability to get a higher level of privilege than kernel-mode code.
If you want a relatively modern OS that is built to lock out the user from the ground up, I'd point you to the Nintendo 3DS[1], whose OS (if not the whole system) was codenamed "Horizon". Horizon had a microkernel design where a good chunk of the system was moved to (semi-privileged) user-mode daemons (aka "services"). The Horizon kernel only does three things: time slicing, page table management, and IPC. Even security-sensitive stuff like process creation and code signing is handled by services, not the kernel. System permissions are determined by what services you can communicate with, as enforced by an IPC broker that decides whether or not you get certain service ports.
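A toy sketch of that permission model (the broker logic here is illustrative, not the actual 3DS "srv:" protocol, though the service names are real ones): each process carries a whitelist of service names in its signed metadata, and the broker only hands out a session handle if the requested service is on that list.

    /* Toy model of a Horizon-style service broker: a process's rights are
     * just the list of service names in its signed metadata. */
    #include <stdio.h>
    #include <string.h>

    struct process {
        const char *name;
        const char *allowed[8];   /* service whitelist from the signed header */
    };

    /* Returns a session handle (>0) if the caller may reach the named
     * service; 0 otherwise. Talking to "fs:USER" or "gsp::Gpu" is the only
     * way to touch the filesystem or GPU, so this check is effectively the
     * whole permission system. */
    int srv_get_service_handle(const struct process *p, const char *service) {
        static int next_handle = 1;
        for (int i = 0; i < 8 && p->allowed[i]; i++)
            if (strcmp(p->allowed[i], service) == 0)
                return next_handle++;
        return 0;   /* not in the whitelist: access denied */
    }

    int main(void) {
        struct process game = { "game", { "fs:USER", "gsp::Gpu", NULL } };
        printf("%d\n", srv_get_service_handle(&game, "gsp::Gpu")); /* granted */
        printf("%d\n", srv_get_service_handle(&game, "am:u"));     /* denied  */
        return 0;
    }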
The design of Horizon would have been difficult to crack, if it wasn't for Nintendo making some really bad implementation decisions that made it harder for them to patch bugs. Notably, you could GPU DMA onto the Home Menu's text section and run code that way, and it took Nintendo years to actually move the Home Menu out of the way of GPU DMA. They also attempted to resecure the system with a new bootloader that actually compromised boot chain security and let us run custom FIRMs (e.g. GodMode9) instead of just attacking the application processor kernel. But the underlying idea - separate out the security-relevant stuff from the rest of the system - is really solid, which is why Nintendo is still using the Horizon design (though probably not the implementation) all the way up to the Switch 2.
[0] In practice, Apple has to be trustworthy. Because if you can't trust the person writing the code, why run it?
[1] https://www.reddit.com/r/3dshacks/comments/6iclr8/a_technica...
Security in this context means the intruder is you: Apple is securing their device so you can't run code on it without asking Apple for permission first.
That makes no sense for a phone because you go outside with it in your pocket, leave it places, connect to a zillion kinds of networks with it, etc. It's not a PC in an airgapped room. It is very easy for the user of the device to be someone who isn't you.
It can be both.
Any sufficiently secure system is, by design, also secure against its primary user. In the business world, this takes the form of protecting the business from its own employees in addition to outside threats.