Comment by wzyboy

19 days ago

It's a genius idea to run the process in an isolated network namespace!

I'm more interested in the HTTPS part. I see that it sets some common environment variables [1] to instruct the program to use the CA bundle in the temporary directory. This seems to pose the same issue as all the `http_proxy` variants: the program may simply choose to ignore the variable.
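
For illustration, setting those variables for a child process looks roughly like this. The variable names below are the common ones rather than the exact set httptap exports (that's in the linked source [1]), and the bundle path is made up:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical path to the temporary CA bundle.
	bundle := "/tmp/httptap-ca/bundle.pem"

	cmd := exec.Command("curl", "-s", "https://example.com")
	cmd.Env = append(os.Environ(),
		// Honored by OpenSSL-based tools, curl, Python requests, and
		// Node respectively -- but only if the program reads them.
		"SSL_CERT_FILE="+bundle,
		"CURL_CA_BUNDLE="+bundle,
		"REQUESTS_CA_BUNDLE="+bundle,
		"NODE_EXTRA_CA_CERTS="+bundle,
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```

A statically linked Go binary, for instance, honors SSL_CERT_FILE, while a Java program ignores all of these and reads its own keystore.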

I see it also mounts an overlay fs over `/etc/resolv.conf` [2]. Would it help if httptap mounted the `/etc/ca-certificates` directory with the temporary CA bundle in the same way?

[1] https://github.com/monasticacademy/httptap/blob/cb92ee3acfb2...

[2] https://github.com/monasticacademy/httptap/blob/cb92ee3acfb2...

Thanks! But yep, I agree, you're exactly right: it's ultimately frustrating that there isn't really an agreed-upon or system-enforced way to specify CA roots for an arbitrary process.

It's true that httptap mounts an overlay on /etc/resolv.conf. This is due to the similarly frustrating situation with DNS resolution: as with CA roots, there isn't a truly reliable way to tell an arbitrary process which DNS server to use, but /etc/resolv.conf is a pretty good bet. And as soon as you put a process into a network namespace you have to provide it with DNS resolution, because it can no longer reach 127.0.0.53:53, the systemd-resolved stub listener, which is the most common setup on desktop Linux systems now.
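
For concreteness, the shape of that mount is roughly the following. This is a simplified sketch using a plain bind mount rather than the overlay httptap actually sets up, and the file path and resolver address are made up:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Write a resolv.conf pointing at a resolver that is reachable from
	// inside the namespace (the address here is illustrative).
	tmp := "/tmp/httptap-resolv.conf"
	if err := os.WriteFile(tmp, []byte("nameserver 10.1.1.1\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Bind-mount it over /etc/resolv.conf. Done inside a private mount
	// namespace, this shadows the file for the traced process only.
	// Requires CAP_SYS_ADMIN in that namespace.
	if err := unix.Mount(tmp, "/etc/resolv.conf", "", unix.MS_BIND, ""); err != nil {
		log.Fatal(err)
	}
}
```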

I do think it might help to mount /etc/ca-certificates as an overlay. When I started looking into the structure of that directory I was kind of dismayed: it's incredibly inconsistent from one distro to the next. Still, it's doable. I'd be interested in any knowledge you can share about how to add a cert to that directory in a way that would be picked up by at least some TLS implementations.
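
For reference, here is roughly what installing a cert into a system trust store looks like on the two layouts I'd expect to meet most often. This is a sketch of the general mechanism, not what httptap does, and it only covers Debian- and Red-Hat-style directories:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// installCA copies a PEM certificate into the distro's trust-anchor
// directory and runs the matching update tool. Only two layouts are
// handled; other distros differ, which is exactly the inconsistency
// described above.
func installCA(pemPath string) error {
	targets := []struct{ dir, update string }{
		{"/usr/local/share/ca-certificates", "update-ca-certificates"}, // Debian/Ubuntu; the .crt extension is required
		{"/etc/pki/ca-trust/source/anchors", "update-ca-trust"},        // Fedora/RHEL
	}
	pem, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	for _, t := range targets {
		if _, err := os.Stat(t.dir); err != nil {
			continue // this layout doesn't exist on this system
		}
		if err := os.WriteFile(t.dir+"/httptap-ca.crt", pem, 0o644); err != nil {
			return err
		}
		return exec.Command(t.update).Run()
	}
	return fmt.Errorf("no known trust-anchor directory found")
}

func main() {
	// Hypothetical path to the generated CA certificate.
	if err := installCA("/tmp/httptap-ca/ca.pem"); err != nil {
		log.Fatal(err)
	}
}
```

In httptap's case the writes would presumably go into the overlay rather than the real directory, so the host's trust store is left untouched.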

  • It's a bit of a thin solution though, isn't it? As you say, it depends on both specific CA-store and resolver behaviour. It's probably going to be robust enough for the most common SSL libraries, such as OpenSSL. But if we're going that route, why not just run the software against a patched SSL library that dumps the traffic?

    That also doesn't require any elevated privileges (as opposed to syscall-interception methods) and is likely much easier to do. It has the added benefit of being robust against applications pinning certificates outright, or just being particular about serial numbers, client certificates, and anything like that. (A sketch of library-level dumping appears below.)

  • What if instead you bound your own DNS server to localhost:53 inside the network namespace? I suppose you'd still have to mess with /etc/resolv.conf in case it points at hardcoded public resolvers, like mine does. (A sketch of such a resolver also appears below.)
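
On the patched-library idea in the first bullet: some TLS stacks already ship a dump hook without any patching, in the form of the NSS key log. A minimal sketch in Go, whose crypto/tls exposes this as KeyLogWriter; Wireshark can use the resulting file to decrypt a captured trace:

```go
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Session secrets are written in the NSS key log format; pointing
	// Wireshark at this file decrypts a pcap of the connection.
	keylog, err := os.Create("keys.log")
	if err != nil {
		log.Fatal(err)
	}
	defer keylog.Close()

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{KeyLogWriter: keylog},
		},
	}
	resp, err := client.Get("https://example.com")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
}
```

This only works when you control or rebuild the client, of course, which is the same limitation as a patched OpenSSL.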
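
And on the localhost:53 idea in the second bullet, a tiny forwarder is enough. This sketch uses the github.com/miekg/dns library; the upstream address is illustrative, and inside httptap's namespace it would have to be reachable via the tun device:

```go
package main

import (
	"log"

	"github.com/miekg/dns"
)

func main() {
	upstream := "8.8.8.8:53" // illustrative; must be reachable from the namespace

	// Forward every query verbatim and relay the answer back.
	dns.HandleFunc(".", func(w dns.ResponseWriter, r *dns.Msg) {
		c := new(dns.Client)
		resp, _, err := c.Exchange(r, upstream)
		if err != nil {
			m := new(dns.Msg)
			m.SetRcode(r, dns.RcodeServerFailure)
			w.WriteMsg(m)
			return
		}
		w.WriteMsg(resp)
	})

	// Binding 127.0.0.53 as well would also catch programs whose glibc
	// config still points at the systemd-resolved stub address.
	srv := &dns.Server{Addr: "127.0.0.1:53", Net: "udp"}
	log.Fatal(srv.ListenAndServe())
}
```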

IMO there's no general solution to the HTTPS part that will work for all kinds of programs and the long tail of certificate pinning implementations.

As a proof by counterexample, imagine malware that uses TLS for communication and goes to great lengths to obfuscate its compiled code. It could be a program that bundles a fixed set of CA certificates into its binary and never opens any files on the filesystem. It can still create valid, secure TLS connections (at least for the ~10 years until most current root CA certificates expire). TLS is all userspace, and there's no guarantee the program uses OpenSSL (or any other common library), so you can't rely on hooking into specific OpenSSL functions either. If the server uses a self-signed certificate and the client accepts it for whatever reason, the situation is even worse.
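
To make the counterexample concrete, embedding a pinned root set is only a few lines in Go. This is an illustrative sketch; roots.pem stands in for whatever CA set the author compiled in:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	_ "embed"
	"log"
)

//go:embed roots.pem
var roots []byte // CA certificates compiled into the binary at build time

func main() {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(roots) {
		log.Fatal("bad embedded roots")
	}

	// Verification consults only the embedded pool: nothing under /etc is
	// read, so no overlay mount or environment variable can inject a CA.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{RootCAs: pool})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```

Against a program like this, only a debugger-style approach (or control over the binary before it's built) gets you plaintext.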

With that said, it's definitely possible to handle 99% of the cases reliably with some work. That's better than nothing.