
Comment by btrask

10 years ago

Yes, that is a good question. I think OpenBSD's pledge(2) is a good model for what a simple and useful privilege interface can enforce (although there is room for improvement).
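
For concreteness, the interface really is that simple: one call taking a whitespace-separated list of promise strings, after which anything outside those promises is fatal. A minimal sketch (OpenBSD-only; the promises and the path read here are chosen purely for illustration):

    /* Minimal pledge(2) sketch: restrict the process to basic I/O plus
     * read-only filesystem access. OpenBSD-specific; not portable. */
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* "stdio rpath": keep stdio-style syscalls and read-only opens. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        FILE *f = fopen("/etc/hosts", "r");    /* allowed by "rpath" */
        if (f != NULL) {
            char line[256];
            if (fgets(line, sizeof(line), f) != NULL)
                fputs(line, stdout);
            fclose(f);
        }

        /* An fopen(..., "w") or socket(2) call here would be a pledge
         * violation, and the kernel would kill the process. */
        return 0;
    }

That coarseness is the trade-off under discussion: the whole policy fits on one readable line, but it can only say "reads are fine", not "reads of these particular files are fine".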

To some extent, this is a question of what the requirements are. If a sandbox limits a browser to accessing certain files (a la chroot), is that secure? Or does it need to be more fine-grained? This isn't something that can be proven; it's mostly a matter of user interface design.
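
As a rough sketch of that coarse, chroot-style option (the directory and the unprivileged uid/gid below are invented for illustration, and a real browser sandbox involves much more than this):

    /* Confine the process to one directory subtree, then drop root. */
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Must start as root for chroot(2) to succeed. */
        if (chroot("/var/empty") == -1 || chdir("/") == -1)
            err(1, "chroot");

        /* Drop privileges so the process can't simply escape the jail. */
        if (setgid(1000) == -1 || setuid(1000) == -1)
            err(1, "drop privileges");

        /* From here on, only files under /var/empty are reachable. */
        return 0;
    }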

I think there are good arguments for keeping security requirements relatively simple and coarse (including ease of implementation and making sure users can understand what guarantees are offered).

Hah, pledge is an example I was thinking of. I think it's too ad hoc to deliver real security; it's more of an exploit-mitigation technology (comparable to something like ASLR), and as such ultimately a dead end.

  • I think it's worth distinguishing two problems with pledge:

    1. It's likely to have bugs, because it's entangled with a constantly changing kernel and can't be proven correct

    2. It isn't fine-grained enough

    If pledge were completely bulletproof, but still limited in terms of what it could restrict, would it still be worthless? Granularity is an interesting problem, but it's more subtle than the typical criticism of ad hoc mitigations suggests.

    • > If pledge were completely bulletproof, but still limited in terms of what it could restrict, would it still be worthless? Granularity is an interesting problem, but it's more subtle than the typical criticism of ad hoc mitigations suggests.

      I think the idea that you can allow a program to get itself into an attacker-controlled state, and that this will be OK as long as you blacklist what the program can do, is fundamentally unworkable. So yes, I think pledge-like approaches are always going to be worthless in the long term: if the program is under the attacker's control, a sufficiently skilled attacker will almost certainly have enough access to do damage, because every program does something, particularly a large and complicated program like a browser. (You could potentially segregate the browser into multiple processes with distinct responsibilities, but I'm not convinced that helps, because those processes still have to send commands to each other, so an attacker who can subvert one can probably control the others.)

      If it had the level of granularity to express restrictions like "should be able to write only files that the local user has chosen via the save dialogue", then it might become an effective security layer, but at that point we're not really talking about a sandbox any more.
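
      A sketch of how that kind of restriction is usually expressed in practice: a trusted process (the one that ran the save dialogue) opens the chosen file and hands only the descriptor to a sandboxed worker, which keeps no filesystem access of its own. The file name below is invented, and it uses pledge(2) only because that's the example under discussion (OpenBSD-specific):

        /* Hypothetical "write only the file the user chose" pattern:
         * the parent opens the path from the save dialogue; the child
         * pledges "stdio" (no wpath/cpath) and can write only through
         * the descriptor it inherited. */
        #include <err.h>
        #include <fcntl.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int
        main(void)
        {
            /* Parent: path chosen by the user in the save dialogue. */
            int fd = open("/home/user/Downloads/page.html",
                O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd == -1)
                err(1, "open");

            pid_t pid = fork();
            if (pid == -1)
                err(1, "fork");

            if (pid == 0) {
                /* Child: no filesystem promises at all, yet the
                 * inherited descriptor still works under "stdio". */
                if (pledge("stdio", NULL) == -1)
                    err(1, "pledge");
                const char msg[] = "<html>saved content</html>\n";
                if (write(fd, msg, sizeof(msg) - 1) == -1)
                    err(1, "write");
                _exit(0);
            }

            close(fd);
            waitpid(pid, NULL, 0);
            return 0;
        }

      Whether that still counts as a sandbox is, as you say, debatable: the kernel-side check stays coarse, while the fine-grained decision lives in which descriptors the trusted process chooses to hand over.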
