Comment by JamesAdir

3 days ago

Sorry for the noob question, but how can Docker help remediate the situation? I'm currently learning about DevOps.

It can't, easily. Docker should not be naively treated as a security solution; it is very easy to misconfigure:

- The Docker daemon runs as root: any user in the docker group effectively also has sudo (e.g. via --privileged or by bind-mounting host paths)

- Ports exposed by Docker punch through the firewall

- In general, you can break the security boundary towards root (not your user!) by mounting the wrong things, setting the wrong flags, etc.

What Docker primarily gives you is a stupid (good!) solution for having a reproducible, re-settable environment. But containers (read: magic isolated box) are not really a good tool to reason about security in Linux imo.

If you are a beginner, as a first step make sure you don't run services as the sudo-capable/root user. Then I would recommend looking into systemd services: you can configure all the Linux sandboxing features Docker uses, and more. This composes well with Podman, which gives you a reproducible environment (a drop-in replacement for Docker) but confined to an unprivileged user.
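A minimal sketch of what such a sandboxed unit might look like (the directives are real systemd options; the service path and the "appuser" name are hypothetical):

```ini
# demo-app.service (hypothetical) - sandboxing sketch
[Service]
ExecStart=/opt/demo-app/run.sh
User=appuser               # a dedicated unprivileged user, never root
NoNewPrivileges=yes        # block setuid-based privilege escalation
ProtectSystem=strict       # /usr, /etc, ... become read-only for the service
ProtectHome=yes            # hide users' home directories
PrivateTmp=yes             # private /tmp, invisible to other services
CapabilityBoundingSet=     # drop all capabilities
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```

Running `systemd-analyze security <unit>` scores how locked-down a unit is and is a good way to discover more of these options.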

  • I agree with what you wrote, and would add: your service's executables and scripts should also not be owned by the user they run as.

    It's unfortunately very common to install, for example, a project as the "ubuntu" user and also run it as the "ubuntu" user. But this arrangement effectively turns any kind of file-overwrite vulnerability into a remote-execution vulnerability.

    Installing executables owned by root:root with mode 0755, and running the service as a separate unprivileged user, is the standard approach.
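    A small shell sketch of that layout (the path and the "appuser" name are hypothetical; the chown step is shown as a comment because it needs root):

    ```shell
    # Code is mode 0755: executable by everyone, writable only by its owner.
    mkdir -p /tmp/demo-app
    printf '#!/bin/sh\necho ok\n' > /tmp/demo-app/run.sh
    chmod 0755 /tmp/demo-app/run.sh
    # In a real deployment: chown root:root /tmp/demo-app/run.sh
    # and run the service as "appuser", which then cannot overwrite run.sh.
    stat -c '%a' /tmp/demo-app/run.sh   # prints the octal mode: 755
    ```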

  • > - Ports exposed by Docker punch through the firewall

    I've been using ufw-docker [1] to force ufw and Docker to cooperate. Without it, Docker ports do actually get exposed to the Internet. As far as I can tell, it does its job correctly. Is there another problem I am not aware of?

    [1] https://github.com/chaifeng/ufw-docker

Docker is not really a security boundary (unless you use something like gVisor), so it's a bit of a red herring here.

The idea is to make your app immutable and store all state in the DB. Then, with every deployment, you throw away the VM running the old version of your app and replace it with a new VM running the new version. If the VM running the old app was somehow compromised, the new VM will (hopefully) start out clean. In this regard, the approach is less vulnerable than just reusing the old VM.

Containers allow separation of access rights: an attacker can no longer pwn a single program/service running on the host system to gain full access to it.

Containers have essentially 3 advantages:

- Restarting the containers after they get pwned takes less than a second, so your business is up and running again quickly.

- Separation of concerns: database, reverse proxy, and web service run in separate containers to spread the risk, meaning an attacker now has to successfully exploit each of those containers to gain the same capabilities.

- Updates in containers are much easier to deploy than on host systems (or VPSes).
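The separation-of-concerns point can be sketched with a compose file; the service names and images here are illustrative, not from the thread:

```yaml
# Hypothetical docker-compose sketch: each concern is its own container,
# so compromising the web app does not directly yield the proxy's or
# the database's privileges.
services:
  proxy:
    image: nginx:stable
    ports:
      - "443:443"            # only the proxy is published to the outside
  web:
    image: example/web-app   # hypothetical application image
    read_only: true          # immutable filesystem; state lives in the DB
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```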

  • > Separation of concerns

    Sorta: yes, the container is immutable and can be restarted, but when it restarts, it has the same privileges and credentials to phone up the same DB again or mount the same filesystem again. I'd argue touching the data is always the problem you're concerned about: if you can get code execution in that container, you own its data.

    • Why do you think ISOs never really took off? I feel like they solve so many issues but only ever see folks reach for containers.

  • Just thinking about this from a Proxmox point of view: applying this advice, do you see an issue with then saying: take a copy of all "final" VMs, delete the VM, and clone the copy?

    And, either way, do you have a thought on whether you'd still prefer a docker approach?

    I have some on-prem "private cloud"-style servers with Proxmox, and am just curious about thinking through this advice.

  • There are already Unix permissions and regular namespaces. Docker is very hard to secure.

Not OP, but I'm assuming it's because of the immutability of the containers, where you can redeploy from a prebuilt image very quickly. There is nothing that says you can't do the same with servers / VMs; however, the deployment methodology for Docker is a lot quicker (in most cases).

Edit: I'm aware it's not truly immutable (read-only), but you can reset your environment very easily, and patching also becomes easier.

It can't. Also, there's nothing inherently wrong with SSH password auth.

  • You might want to back those statements up.

    • Not parent, but see my sibling comment re: Docker. The issue is imo that Docker is very easy to misconfigure and gives you the wrong mental model of how security on Linux works.

      On SSH password auth: it's secure if you use a long, random password for every user that isn't reused elsewhere. But it is also very easy not to do these things. SSH certs are just more convenient imo.

    • Using Docker does not help in this specific case: if the attackers came in via SSH, they will have root access as before, and if they come in through the application, they still control your application inside the container and can make it serve what they want.

      For SSH, the problem does not lie with password auth itself, but with weak passwords. A good password is more secure than a keypair on a machine whose files you can't keep private.
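      For reference, key-only auth can be enforced server-side; a minimal sshd_config fragment (real OpenSSH option names, everything else left at its defaults):

      ```
      PasswordAuthentication no
      KbdInteractiveAuthentication no
      PubkeyAuthentication yes
      ```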