Comment by lmm
10 years ago
> Once you have a provably correct sandbox (which I think is possible today, if you exclude things like 3D acceleration), you can run whatever you want in it: old software, games, Flash, Windows 98. Application-specific proofs only work for applications written in the approved way.
What would a generic sandbox enforce? That an application never accesses the network? That it never accesses the local filesystem? That it never communicates with another process? Browsers need to do all those things and more. I think you need application-specific knowledge to be able to enforce the restrictions that matter.
Yes, that is a good question. I think OpenBSD's pledge(2) is a good model for what a simple and useful privilege interface can enforce (although there is room for improvement).
To some extent, this is a question of what the requirements are. If a sandbox limits a browser to accessing certain files (à la chroot), is that secure, or does it need to be more fine-grained? This isn't something that can be proven; it's mostly a matter of user interface design.
I think there are good arguments for keeping security requirements relatively simple and coarse (including ease of implementation and making sure users can understand what guarantees are offered).
Hah, pledge is an example I was thinking of. I think it's too ad-hoc to deliver real security; I think it's more of an exploit-mitigation technology (comparable to something like ASLR), and as such ultimately a dead end.
I think it's worth distinguishing two problems with pledge:
1. It's likely to have bugs, because it's entangled with a constantly changing kernel and can't be proven correct
2. It isn't fine-grained enough
If pledge were completely bulletproof, but still limited in terms of what it could restrict, would it still be worthless? Granularity is an interesting problem, but it's a subtler one than the typical criticism of ad-hoc mitigations suggests.