Comment by alex_duf

6 years ago

I know very little about the topic, so bearing that in mind:

We're already in a world where we can't quite trust our CPUs, so why trust baseband chips?

Even if it makes the design more complicated, it may still reduce the potential attack surface.

We can't fully trust the correctness of modern complicated CPU designs, leading to problems like <insert all speculative bypasses that have affected Intel CPUs the past 2 years>. But despite their complexity, CPUs and the CPU part of a smartphone SoC are usually extremely well understood (relatively speaking). The reason is that you actually need to run your software on these CPUs, so they need to be understood rather well. With better understanding comes better trust.

On the other hand, the baseband processor is mostly unknown, black box hardware, running unknown black box software, that completely controls the transmission of cellular data. Of course it would be horrible if there was no separation between the CPU and baseband. You shouldn't trust that setup. But as it turns out, separation does exist!

  • > But as it turns out, separation does exist!

    The article you linked to says: "There can be an IOMMU with very tight restrictions providing proper isolation or a setup where the IOMMU is effectively not doing anything and permits access to all of the memory. Determining that requires real research."

    So it sounds more like separation might or might not exist and you're not likely to find out if it does on your particular device.
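
    For what it's worth, on Linux you can at least check whether the platform exposes IOMMU groups at all -- a rough first-pass check, nowhere near the "real research" the article means, and whether the baseband even shows up there is device-specific (the sysfs path is the standard one):

        import glob
        import os

        # List which devices sit behind an IOMMU. A missing directory or an empty
        # listing is a strong hint there is no isolation to speak of; a populated
        # one still says nothing about how tight the actual mappings are.
        groups = glob.glob("/sys/kernel/iommu_groups/*/devices/*")
        if not groups:
            print("no IOMMU groups exposed -- assume no isolation")
        for dev in sorted(groups):
            group_id = dev.split("/")[4]  # numeric group id from the path
            print(f"group {group_id}: {os.path.basename(dev)}")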

> Even if it makes the design more complicated, it may still reduce the potential attack surface.

An increase in complexity would rule out a reduction of the attack surface. In fact, the attack surface would be guaranteed to increase.

  • Well, that isn't generally true if the complexity is actually a security boundary. After all, all security designs are based on layers -- it's hard to add a layer of security without adding complexity.

    As a counter-example -- removing all of Linux's privilege checking would make the code a lot less complicated, but the attack surface would increase a million-fold. In this case, the Librem 5's separation of the baseband such that communication is done over USB (a protocol which doesn't have DMA) is a security improvement over giving the baseband DMA access.
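
    To make the difference concrete, here's a rough, untested sketch of host-driven modem access over a USB serial link (the device path and AT command are generic examples, not Librem 5 specifics): the host opens the port and initiates every transfer itself, rather than the modem reaching into host memory on its own.

        import os

        MODEM_TTY = "/dev/ttyUSB2"  # hypothetical device node for a USB-attached modem

        def at_command(cmd: str, max_reads: int = 10) -> str:
            """Send one AT command and collect the reply. Every byte moves because
            the host CPU asked for it; the modem never touches host memory directly,
            unlike a DMA-capable device."""
            fd = os.open(MODEM_TTY, os.O_RDWR | os.O_NOCTTY)
            try:
                os.write(fd, (cmd + "\r\n").encode("ascii"))
                reply = b""
                for _ in range(max_reads):
                    reply += os.read(fd, 4096)  # may block until the modem answers
                    if b"OK" in reply or b"ERROR" in reply:
                        break
                return reply.decode("ascii", errors="replace")
            finally:
                os.close(fd)

        print(at_command("AT+CSQ"))  # standard signal-quality query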

    • USB protocols are often handled in software, some in the Linux kernel, some in userspace. So if someone discovers an RCE over USB in the Linux USB stack, the modem will have direct memory access, or even RCE on the main CPU with kernel privileges.

      I have no experience with PCIe, so maybe it's harder to abuse the host system over USB than over PCIe these days.

      You can think of USB as being similar to using a TCP/IP protocol between multiple machines capable of executing code, and having to execute code to handle higher level protocols, like HTTP or whatnot. If there's a code execution bug anywhere, the USB capable device will be able to exploit it.

      And by default, there's a code-execution bug on all normally configured Linux machines: if you don't create a USB "firewall", the modem can just present a virtual keyboard and the kernel will happily accept all input from it, for example. So the modem can type whatever it wants into your shell. It will be obvious, but it's still device->host RCE.
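
      As a sketch of what such a "firewall" can be built on: the kernel ships a USB device-authorization mechanism in sysfs (tools like usbguard wrap a similar idea). Untested, needs root, and the bus path below is just an example:

          import glob

          def deny_new_usb_devices() -> None:
              """Set authorized_default=0 on every USB host controller so newly
              enumerated devices (virtual keyboards included) are unusable until
              explicitly allowed."""
              for ctrl in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
                  with open(ctrl, "w") as f:
                      f.write("0")

          def authorize(bus_path: str) -> None:
              """Allow one specific enumerated device, e.g. "1-1.2" (example path)."""
              with open(f"/sys/bus/usb/devices/{bus_path}/authorized", "w") as f:
                  f.write("1")

          deny_new_usb_devices()
          # authorize("1-1.2")  # then allow only the devices you trust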

      1 reply →

    • > Well, that isn't generally true if the complexity is actually a security boundary.

      If the security boundary is baked into the code or the design of the system, and also assuming it doesn't introduce more bugs, then I agree[1]. Security controls that get introduced on top do risk an increase in attack surface. An additional interface is by definition an additional "surface"; the question is whether it can be attacked.

      [1] You could still argue that more lines of code always mean more bugs (but let's assume it's very close to bullet-proof).

      1 reply →