The browser is the worst sandbox ever designed

10 years ago (bentrask.com)

> There are too many changes.

I am sorry, security guys, but unless it's military-grade software, security is just another feature. And it is not the highest-priority one. Of course, security would be easier if there were no new features or spec changes. And software development in general would be much easier if there weren't any of those pesky end users.

Which is a good thing, probably. Otherwise, you'd mostly be out of your jobs. Security is an ever-changing, chaotic battlefront.

Having said that, sandboxing is a good idea. Theoretically. But it is really hard to implement right: the attack surface at the edges of the boxes is quite large. Remember Java applets? They were sandboxed neck-deep, with an excellent security model. Did it help?

  • The thing that you're missing here is that some of us do work on military-grade software and we still need to use a browser. We need to trust that going to a website won't leak information off of our HDs. I know a guy who builds fighter aircraft displays in a giant clean room he built in his home. He was writing code for them on the same computer he was using for day-to-day work because he didn't really know any better (more of an electrical engineer than a software dev). My point is that you don't get to use the "it's just another feature" argument with some of this stuff.

    For Counter-Strike, sure. But for things like spreadsheets or web browsers, hundreds of thousands or millions of people working in arms manufacturing or intelligence are going to be using your software, and it needs to not leak designs to foreign intelligence agencies or competitors.

    • > "The thing that you're missing here is that some of us do work on military-grade software and we still need to use a browser."

      Then use Qubes OS:

      https://www.qubes-os.org/

      You can set up a separate VM in Qubes OS purely for browsing. That way, even if your browser was compromised, it would be isolated from your other applications.

    • Is his house on a military base? Otherwise, how can he have such sensitive material in his home? That seems like a much bigger security risk to me...

  • > I am sorry, security guys, but unless it's military-grade software, security is just another feature. And it is not a highest priority one.

    I completely disagree: security is the foundation of any software system. Without security, the system simply cannot be trusted to do anything correctly, not even add 1 and 1 together. For far too long we've relied on our systems being accidentally correct rather than deliberately secure; we need to fix that.

    If something's mathematically possible, then eventually it will happen. We need to build systems where security flaws are impossible, because then… they won't happen.

    • > Without security, the system simply cannot be trusted to do anything correctly, not even add 1 and 1 together.

      Not really. For a simple example, imagine calculator software which has been mathematically proven to work correctly for any number with 30 or fewer digits, but which overflows a fixed-size buffer if the user inputs a number with more than 30 digits. That software could absolutely be trusted to add 1 and 1 together, while still having a security issue.
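
      A minimal illustration in C (my sketch, not the parent's; a real proof would also have to bound the arithmetic itself, since `long` overflows well before 30 digits):

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      /* Toy adder: the arithmetic is correct for small inputs like
       * "1 1", but the parsing has a security bug: any token longer
       * than 30 characters overflows `buf`. Correctness and security
       * are distinct properties. */
      int main(void) {
          char buf[31];               /* 30 digits + terminating NUL    */
          long a, b;

          scanf("%s", buf);           /* BUG: no width limit -> overflow */
          a = strtol(buf, NULL, 10);  /* ("%30s" would fix it)           */
          scanf("%s", buf);
          b = strtol(buf, NULL, 10);

          printf("%ld\n", a + b);     /* 1 + 1 = 2, reliably */
          return 0;
      }
      ```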

  • The idea is to separate security out so that new features and spec changes don't impact it. The necessary features of the sandbox are defined by the hardware, which doesn't change very fast. Everything else can be done inside the sandbox, without worrying about security.

    Java applets are another example of security competing with features. Any part of the runtime could cause an exploit. If the sandbox had been separate, it would have been safer.

I mostly disagree. A sandbox like seccomp with a truly minimal set of system calls allowed through (read, write, close, exit) is a tiny attack surface and provides functionality roughly equivalent to raw NaCl. As far as I know, no holes have been found in seccomp configured like that. The problem isn't the sandbox per se; it's the set of services (3D! Audio! USB!) that are allowed through the sandbox. This proposal does nothing to help address that problem.
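
For reference, here is roughly what that minimal configuration looks like. This is a sketch using seccomp's strict mode, which permits exactly read, write, _exit, and sigreturn (a policy that also allows close would need SECCOMP_MODE_FILTER with a small BPF allowlist):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void) {
    /* Irrevocably restrict this process to read/write/_exit/sigreturn;
     * any other system call kills the process with SIGKILL. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    const char msg[] = "running with a four-syscall attack surface\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* open("/etc/passwd", O_RDONLY) here would be instantly fatal. */
    _exit(0);
}
```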

  • Exactly.

    Every sandboxed app ever has needed native access of some sort to do something useful.

    The best example of this difficulty is probably graphics (WebGL). To provide a compelling user experience, the API must allow apps to upload almost arbitrary code (shaders).

  • The idea is that Rust (or something like it) is genuinely necessary to address that problem, because the necessary API is complex, as you say.

If we can get the defect rate down to zero, then the product of that and the attack surface will still be zero.

If we can't do that, how much will sandboxing help? Sure, we can expose a much smaller surface, say 1% of the size of what the browser currently exposes. Will that be good enough, though? Xen's surface is about as small as you can get for a reasonable general-purpose sandbox, and, per the article, Xen has a lot of vulnerabilities too.

I think the only option is to push the defect rate down to zero. (This may be impossible; if so, we're all going to die. Computing power will inevitably advance to the point where any vulnerable system can be cracked by a lone terrorist, and economics and our inability to coordinate ensure that power plants, water treatment facilities, automated bioengineering facilities etc. are going to be computerized.)

Rust is, I think, worthwhile as a step on the road towards provably correct programs - memory management isn't everything, but it's something. Sandboxing OTOH feels like a dead end, because it's inherently ad-hoc and unprincipled.

  • Thanks for this comment. It's the first really substantive response to my core thesis.

    I don't think the math is on your side. As the defect rate approaches zero, there are diminishing returns to pushing it lower. On the other hand, the attack surface effect becomes overwhelming. Addressing both at once will be far more effective than concentrating on one or the other.

    You might be right that in the long run, the defect rate needs to be 0.0. But that is a long way off. Once we've picked the low-hanging fruit (including perhaps a provably correct sandbox), then we can start thinking about how to prove the correctness of arbitrary applications.

    • > I don't think the math is on your side. As the defect rate approaches zero, there are diminishing returns to pushing it lower. On the other hand, the attack surface effect becomes overwhelming. Addressing both at once will be far more effective than concentrating on one or the other.

      Huh? That's not how it works, is it? If we want to minimize X * Y and we currently have X = 50 and Y = 5, it's much more efficient to focus on bringing Y down.
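
      One way to formalize this (my sketch, not the parent's): with risk R = XY, the payoff of a unit of work on either factor is the size of the other factor:

      ```latex
      R = XY, \qquad
      \frac{\partial R}{\partial X} = Y = 5, \qquad
      \frac{\partial R}{\partial Y} = X = 50
      ```

      A unit shaved off Y removes ten times as much risk as a unit shaved off X, so effort should go wherever the partner factor is larger.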

      > Once we've picked the low-hanging fruit (including perhaps a provably correct sandbox), then we can start thinking about how to prove the correctness of random applications.

      "low-hanging fruit" tends to mean doing unprincipled things that can't be generalized / don't contribute anything in the long term, right? My view is that there's limited value in lowering the rate of security flaws in ways that aren't on the path to bringing it to zero. Getting it to zero will be valuable; halving it isn't really (there's some economic value in reducing the frequency of breaches, but not enough). So I don't think ad-hoc countermeasures are worthwhile.

      To the extent to which a sandbox can be written in a principled/provable way it will be valuable. I'm not at all convinced that a general-purpose sandbox is possible, but that's a different question. (The techniques of factoring a program into pieces with the minimal capabilities that they need are valuable, but I think this needs to be done far more holistically than is possible with a sandbox; the security design needs to reflect the program design, because whether particular operations are safe or not is a function of the application-specific context. But this is very much speculation on my part)

In fact, sandboxing the browser in a VM (which I think the author suggests, although he wants a more lightweight approach) with only limited file system access is what is done by many security-conscious enterprises such as banks. They usually embed Firefox in a Linux VM.

There is "Browser in the Box": http://www.sirrix.com/content/pages/BitBox_en.htm

And then there was also VMWare's Secure Browser Appliance (in 2005! although I cannot find any recent mentions of it): https://rcpmag.com/articles/2005/12/13/vmwares-secure-browse...

Taking this to a new level by implementing a sandbox tailor-made for this purpose might be a worthwhile approach. However, for it to be effective you will always need to inconvenience your users: as soon as the browser running in the sandbox has access to the full filesystem, you are back where you started. And if the browser does not have full access to the filesystem (but, e.g., only to a specific "Downloads" folder, as in current sandboxes), you inconvenience your users: to upload a file, for example, they first need to copy it to the Downloads folder.

  • > E.g. for uploading files, you first need to copy them to the Downloads folder.

    No, you need a "portal" or "intent" or "capability", whatever you want to call it. The browser asks the sandbox to ask the user to select a file, and the browser gets that file. Android has been able to do this for a while, but full sdcard access is so easy that everyone uses it instead. Flatpak (née xdg-app) will do this.

    • This capability system is exactly how OS X's built-in sandbox works. Sandboxed apps don't have unrestricted access to the filesystem, but if they invoke the system-provided Open dialog, and the user selects a file, the application is granted access to that file (which it can persist, so it can continue to access that file in the future).
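
      On Linux, the mechanism underneath such portals is file-descriptor passing: a trusted broker (the process that showed the dialog) opens the chosen file and hands the sandboxed process just that descriptor over a Unix socket. A minimal sketch of the broker side (my own; real portals like Flatpak's layer a D-Bus protocol on top):

      ```c
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/uio.h>

      /* Send an already-open fd to the sandboxed peer via SCM_RIGHTS.
       * The kernel duplicates the descriptor into the receiver, so the
       * sandbox gains access to exactly one file and nothing else. */
      int send_fd(int sock, int fd) {
          char dummy = 'F';
          struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
          union {
              char buf[CMSG_SPACE(sizeof(int))];
              struct cmsghdr align;   /* forces correct alignment */
          } u;
          memset(u.buf, 0, sizeof u.buf);

          struct msghdr msg = {
              .msg_iov = &iov, .msg_iovlen = 1,
              .msg_control = u.buf, .msg_controllen = sizeof u.buf,
          };
          struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
          c->cmsg_level = SOL_SOCKET;
          c->cmsg_type = SCM_RIGHTS;  /* "here is a capability" */
          c->cmsg_len = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(c), &fd, sizeof(int));

          return sendmsg(sock, &msg, 0);
      }
      ```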

When I read things like this, I get a little hope that with the end of Moore's law, we will actually start improving the instruction set architecture in such a way that we can write performant software in higher-level languages instead of relying upon assembly++ languages like C, Go, and Rust.

I mean, with the number of VM languages out there like Java, PHP/Hack, .NET, various LISPs, etc., you'd figure that some hardware support for boxing/tagging/GC would be a standard feature by now, but nope. Instead, the best approach we have for secure and performant software on x86 is 1) write the complicated system in tedious assembly++, 2) run it under a VM.

When he asks for a sandbox to provide "a secure drawing API (including 3D, which, yes, is hard). You need a secure file system. You need secure network, and microphone, and webcam, and probably even USB. (It should also be possible to block or control access to each of these.)"

... isn't that the job of a normal OS kernel? The article may call it a "sandbox", but it sounds like normal access control in any OS; there's not much sandboxing left when you're asking for all those APIs.

  • In-browser applications and virtual machine technology are both poor workarounds for the same problem: the design of modern operating systems gets it badly wrong.

    If I can run completely different operating systems on top of one another inside a VM application, why shouldn't I be able to run those other operating systems as applications? If I can run JavaScript downloaded from the internet that is JIT-compiled to native code inside a browser application, why can't I just run native code downloaded from the internet directly (with the same security, such as it is, as JavaScript code)? Why should I be able to access one set of APIs from JavaScript code and a different set of APIs from native C++ applications?

    Why should I be able to set up a development environment in a VM and migrate it between computers, but not be able to set up a separate development environment natively in the OS for just one or a few applications, and export it and all its dependencies just as easily as exporting a VM? In other words, if what I care about is roughly 15MB of customized files and 100KB of OS settings, why should I have to put 1.2GB of operating system files and other junk into a VM to make it sandboxed and portable? (In this last example I'm speaking of a Linux project called CDE (Code, Data, and Environment), which accomplishes something like this by analyzing what files and shared libraries a running application accesses and packaging just those.)

    • This is a good comment. Operating systems were originally designed to protect people from each other. My process can't interfere with your home directory. My process can't mess up the entire OS (unless I'm admin). There was a time when most programs were not actively hostile to the user running them.

      These days just about every application is user-hostile in some way. Even open source Windows applications, depending on where you download them from, might come with a hostile installer. Programs install background tasks. Programs track you.

      Mobile operating systems have been a step in the right direction. But a good operating system should allow us to run whatever binaries we find anywhere on the Internet, with those binaries unable to do anything harmful to us.

https://xkcd.com/1200/. Nothing else to say.

  • That is indeed a very incisive critique, but what it and OP and this thread in general suffer from is a fuzziness about threat models.

    Driver protections in the OS prevent people who write popular software from remotely taking over large numbers of computers. They're not intended to protect against your laptop getting physically stolen.

    As for the OP here, I'm still confused about what precise threat model it's concerned with :)

If your battle with invaders happens inside your home, then you have only one more misstep before game over.

"When asked whether it would be prudent to build a defensive wall enclosing the city, Lycurgus answered, "A city is well-fortified which has a wall of men instead of brick."

https://en.wikipedia.org/wiki/Laconic_phrase

"It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle."

https://en.wikiquote.org/wiki/Sun_Tzu

Moral: https://en.wikipedia.org/wiki/Code_signing + https://en.wikipedia.org/wiki/Web_of_trust + https://en.wikipedia.org/wiki/Quality_assurance

Good luck getting Microsoft, Apple, Google, and all the different Linux stakeholders into one boat on that :/ At least in the browser world there are standardisation efforts, and even if they don't work very well in places, it's the closest we ever got to a "write once, run everywhere" platform. And it is the only remaining open platform where I don't need to enter a 'relationship' with a 'platform owner' to write and publish software for it (edit: except Linux, of course).

I would like an OS-level sandbox that doesn't treat its users like idiots. But this is not what's currently happening (especially the "don't treat the user like an idiot" part). iOS and Android have always been closed platforms, and OS X and Windows are currently being locked down at high speed.

The browser platform might be a mess, but it's less of a mess than the sum of all the underlying operating systems the browser runs on, and it is the only open-yet-secure platform that exists.

A sandbox written in Rust can only be secure if the libraries it depends on, the compiler used to build it, and the kernel (along with the compiler and libraries used to build the kernel) were all written in Rust as well.

Furthermore, even if applications are sandboxed, that only prevents vulnerabilities in one application from being used to attack other applications. A web page able to compromise my web browser would still be able to get all my browsing history, my usernames, my passwords… The sandbox would not help with that.

This does not necessarily make it worth rewriting everything in Rust, but it is worth considering writing new software in Rust instead of C, especially low-level software like compilers, kernels, and libraries. Other applications can be written in a higher-level language with garbage collection and static typing instead of C.

  • A sandbox like NaCl doesn't depend on the kernel for security. In fact, it shields the kernel, which is good, because common OS kernels tend to have large attack surfaces of their own. The compiler of the sandbox itself needs to be reliable (something like CompCert, until Rust matures in this regard), but the compiler(s) of the software inside the sandbox don't matter.

    You're right that stuffing all of Chromium into a single sandbox would not be very good, because pages would be able to attack the browser (history, passwords, etc) and each other. You'd want to run each renderer in its own sandbox (which to some extent Chrome already does).

"The classic solution to every combinatorial explosion is modularity: separation of concerns."

What is the basis for this comment? It sounds reasonable, but how would one test it?

  • I've been having a long-running private discussion with the author about this. My anti-modularity stance: http://akkartik.name/post/modularity. However, this is a sane use of modularity. My point is mostly that just adding more module boundaries without thought isn't always a net win. So I guess I'd change the statement you quoted to "the right separation of concerns." And the right separation takes trial and error to discover.

    • Well, of course, your separation of concerns is only as useful as the boundary is well thought out.

      Modularity is not a magic wand (Brooks' "silver bullets" aren't a strong enough concept for what you are debunking) that makes all the problems of software development go away.

  • Complexity grows (a lot) faster than the number of parts. A module with n parts can have O(n^2) pairwise interactions, so the potential for bugs (and debugging costs) grows faster than code size.

    Many features of high level languages are attempts to add modularity and thus break the complexity up into manageable pieces. Functions, classes with "private" features, UNIX-style[1] tools, and other types of modular programming are ways to separate local complexity from the global environment.

    A complex module with many internal parts adds a lot of complexity to a program, but by restricting the interface we greatly reduce how much of that complexity is exposed to the parent environment. A simple interface with only a few parts can hide many more parts that would otherwise have been potential interactions with the global environment (bug/attack surface).

    Even better, well-defined interfaces allow the separate modules to be implemented and debugged separately. Which would you rather debug? One 1000-line program or 10 separate 100-line programs? While they are the same amount of code, smaller programs are much easier to understand[2] (rough numbers in the sketch below).

    [1] http://www.catb.org/esr/writings/taoup/html/ch01s06.html#id2...

    [2] http://www.catb.org/esr/writings/taoup/html/ch01s06.html#id2...

    TL;DR - "KISS: Keep It Simple, Stupid!"
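
    To put rough numbers on the 1000-line example above (my arithmetic, counting potential pairwise interactions as n(n-1)/2):

    ```latex
    \binom{1000}{2} = 499{,}500
    \qquad \text{vs.} \qquad
    10 \times \binom{100}{2} = 49{,}500
    ```

    That's an order-of-magnitude drop in the interactions a reader has to consider, before even counting the (deliberately small) interfaces between the modules.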

  • I don't actually have a cite for you, although it comes up in information theory. For example, guessing a password one letter at a time is (dramatically) easier than guessing the whole thing at once. I'd be curious to know what the term for it is.

    Edit: "factoring" is one word for it.

  • Steve McConnell's Code Complete and John Holland's work on technological evolution suggest themselves. Modularity tends to contain complexity generally.

Some points on sandbox design worth mentioning:

* The OS should be the sandbox. It has all the features of a sandbox; they just need to be secure.

* In addition to userland processes, if we trust the compiler of a high-level language without unsafe features (like Java), we should be able to compile programs with it and load them directly into the kernel. (User policy is enforced by the compiler.) This is similar to what Singularity does, and it has a number of advantages. First, since there is no task-switch boundary, we reap a speed benefit. Second, and attendant to this: since there is zero cost to switch processes, people are encouraged to separate their applications into multiple processes, which promotes modularity.

* Second, the OS kernel itself should be written in a high-level language.

* Finally, we need security in the compiler itself. This is achieved through the Futamura projections. That is, all that needs to be written of the compiler is an interpreter; the actual compiler is condensed into the notion of a partial evaluator; the partial evaluator essentially figures out how to substitute any given program into the interpreter efficiently, hence compiling it.
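
For reference, in the standard notation (mine, not the comment's): writing mix for the partial evaluator, the three Futamura projections are

```latex
\begin{aligned}
\text{target} &= \mathit{mix}(\text{interpreter},\ \text{source}) \\
\text{compiler} &= \mathit{mix}(\mathit{mix},\ \text{interpreter}) \\
\text{compiler generator} &= \mathit{mix}(\mathit{mix},\ \mathit{mix})
\end{aligned}
```

The only artifact that has to be audited by hand is the interpreter (plus mix itself); the compiler falls out mechanically.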

Can anyone help?

He is writing that the current browser sandbox model is not secure, all in a dramatic, clickbaity manner.

After many esoteric lines, he says (maybe it's the tl;dr)...

"We need a highly secure (ideally provably secure) sandbox that doesn’t have any features! Then, you can run an insecure browser inside, where security doesn’t matter."

Then again some more lines of confusion.

So, what does he suggest? Putting the browser in a VM?

  • Put the browser in a VM/sandbox, yes.

    tl;dr: Right now there is a competition between features and security, and security is losing. By separating them, they wouldn't need to compete, and we could have both. It isn't a good idea for a sandbox to directly handle things like CSS transforms.

    Is my writing really esoteric?

    • Thanks for your reply, and apologies for the 'esoteric'. Maybe not the right wording, but I read your post a few times and I would have liked to get more details about the idea.

      The idea sounds OK at first glance once you've clarified it, but how does it work, will it work, and what would the implications be? And many more questions. Currently I see the core idea surrounded by many vague statements.

      And btw, you can do this today already: just start a VM with a browser (it might be a bit resource-heavy, and the integration into the main OS subpar). Or Docker with a browser. Not sure, though, if the latter fulfills your security requirements.

      But in the end, the browser is more than an isolated piece of software in a VM. Integration at the OS level is required, and even trivial things like a full-screen mode, while possible, might complicate matters within a VM. The same goes for 3D acceleration and everything else where direct access to an API is required. Suddenly the VM is piping everything through the main OS, because a browser just needs access to all the OS APIs, and you end up back at square one. So I also find your idea a bit confusing.

I think that partitioning and sandboxing software is underutilized, and that the Chromium sandbox is well designed.

But I don't know what sandbox escapes for Chromium look like. My guess would be that they involve things like browser plugins, or nasty stuff like WebGL. In which case sandboxing hasn't failed; we're just not doing enough of it.

I'm not sure I understand this correctly, but doesn't this forget that the browser itself also contains sensitive information? For example, login information would be inside the sandbox, and therefore you'd still need security in the browser...?

Nice article, though there's a slight historical inaccuracy:

> Blink raised the implementation quality bar with its tab-per-process design and privilege dropping.

These were Chrome features, not Blink features. Blink didn't even exist when they were developed.

  • They really belong to the layer between Blink and Chrome: the Chromium Content module, the API that projects like node-webkit and Electron consume.

> security risk is the product of defect rate multiplied by attack surface.

Call me crazy, but a security-sensitive application should not have direct access to 3D acceleration, microphone, camera, USB, etc.

  • Not only should it not have direct access to hardware[1], but important data such as private keys shouldn't even be directly accessible by any part of the browser. We've known how to keep keys in their own management process for a long time (e.g. gpg-agent, ssh-agent); a sketch of the pattern follows the footnote below.

    [1] Putting USB anywhere near the web may be the stupidest idea I've ever heard. Attempts to add USB access to the browser should be seen as an attack. A camera or microphone can be a serious security problem, but failures in those features can (at least theoretically) be limited to the specific hardware involved. Failures related to the USB bus can grant access to a lot of hardware that was never designed for security.
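
    A minimal sketch of the agent pattern mentioned above (mine, not ssh-agent's actual protocol; sign() stands in for a real signature such as Ed25519):

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* The key lives only in the agent process; the untrusted side
     * sends data over a socketpair and gets back a signature, but
     * can never read the key itself. */
    static void sign(const unsigned char *key, const char *data, char *out) {
        /* placeholder for a real signature computation */
        snprintf(out, 64, "sig(%02x..,%s)", (unsigned)key[0], data);
    }

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        if (fork() == 0) {              /* agent: the only key holder */
            unsigned char key[32] = { 0xAB };
            char req[64], sig[64];
            ssize_t n = read(sv[0], req, sizeof req - 1);
            req[n > 0 ? n : 0] = '\0';
            sign(key, req, sig);
            write(sv[0], sig, strlen(sig));
            _exit(0);
        }

        /* client: may request signatures, never sees the key */
        char sig[64];
        write(sv[1], "login-challenge", 15);
        ssize_t n = read(sv[1], sig, sizeof sig - 1);
        sig[n > 0 ? n : 0] = '\0';
        printf("got: %s\n", sig);
        return 0;
    }
    ```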

  • See the later quote:

    > (It should also be possible to block or control access to each of these.)

    The good news is that the capabilities of the sandbox don't need to be ever-expanding, the way browsers have been. The sandbox should support everything the hardware can do, and then there are policy decisions about what capabilities web pages actually get.

The best things suggested:

- Strip down the features of the application to minimize the attack surface (see the bloated, badly designed web APIs...).

- Don't let sensitive code be produced by interns.

Rust isn't a sandbox. The whole point of a sandbox is that it survives incorrect software at runtime. Rust is compile-time magic. Sure, cool, different thing.

I think what he wants is a browser tied to a Docker container. Does a browser in a container make any sense technically?

You can already do a lot with Linux namespaces. Every layer does help security, though it can slow down performance.