Comment by fsflover

4 years ago

The point is that the proprietary software has no access to the RAM or CPU and plays absolutely no role whatsoever in the device usage. I personally agree that it can be called "hardware" and don't care that it has another CPU.

The baseband is on an upgradable M.2 card and also has no access to anything. It can even be killed with a hardware switch. It's the best smartphone you can find if you care about this. Nobody says that other blobs are fine, but it's already a huge step toward freedom.

> The point is that the proprietary software has no access to the RAM or CPU and plays absolutely no role whatsoever in the device usage.

The proprietary software literally configures the RAM on the phone. It is critical for making the RAM work; of course it has access to the RAM! Supposedly it is quiesced after training, but I haven't seen any security analysis claiming that the firmware couldn't just take over the system while it runs.

But they added two extra layers of indirection, even though the blob ends up running on the same CPU with the same privileges in the end anyway, because all that obfuscation somehow let them get in via the FSF's "secondary processor" exception. The end result is the same: you're still running a blob to perform a critical, security-relevant task.

If the goal is security ("blobs can't take over my system") and stuff running during the boot process doesn't count, then Apple's M1 machines are on precisely the same level as the Librem 5: they also run blobs on boot, and at runtime all remaining blobs on separate CPUs are sandboxed such that they can't take over the main RAM/CPU.

  • You are of course right that technically it has access to the RAM.

    > Supposedly it should be quiesced after training, but I haven't seen any security analysis that claims that firmware couldn't just take over the system while it runs.

    I was under the impression that it was the whole point of the exercise. It would be interesting to know otherwise.

    > even though the blob ends up running on the same CPU with the same privileges in the end anyway

    This is not how I understood it. The Librem 5 stores these binary blobs on a separate Winbond W25Q16JVUXIM TR SPI NOR Flash chip and it is executed by U-Boot on the separate Cortex-M4F core. From here: https://source.puri.sm/Librem5/community-wiki/-/wikis/Freque....

    • > I was under the impression that it was the whole point of the exercise. It would be interesting to know otherwise.

      It absolutely wasn't. Look into it. In every case, the blob ends up running on the RAM controller CPU and supposedly finishes and halts. The whole point of the exercise was obfuscating the process used to get to that point, so that the main CPU never physically moves the bits of the blob from point A to point B. Really.

      > This is not how I understood it. The Librem 5 stores these binary blobs on a separate Winbond W25Q16JVUXIM TR SPI NOR Flash chip and it is executed by U-Boot on the separate Cortex-M4F core.

      That is incorrect (great, now they either don't know how their own phone works or they're lying - see what I said about obfuscation? It's great for confusing everyone).

      The M4 core code is not proprietary; it's the pointless indirection layer they wrote and it is not loaded from that SPI NOR flash. It's right here:

      https://source.puri.sm/Librem5/Cortex_M4/-/tree/master

      That open source code, which is loaded by the main CPU into the M4 core, is responsible for loading the RAM training blob from SPI flash (see spi.c) and into the DDR controller (see ddr_loader.c).

      The actual blob then runs on the PMU ("PHY Micro-Controller Unit") inside the DDR controller. This is an ARC core that is part of the Synopsys DesignWare DDR PHY IP core that NXP licensed for their SoC. Here, cpu_rec.py will tell you:

        firmware/ddr/synopsys/lpddr4_pmu_train_2d_imem.bin
            full(0x5ac0)   ARcompact                          chunk(0x4e00;39)    ARcompact 
      

      The normal way this is done is that the DDR training blob is simply embedded into the bootloader like any other data, and the bootloader loads it into the PMU. Same exact end result, minus involving a Cortex-M4 core for no reason and minus sticking the blob in external flash for no reason. Here is how U-Boot does it on every other platform:

      https://github.com/u-boot/u-boot/blob/master/drivers/ddr/imx...

      Same code, just running on the main CPU because it is absolutely pointless running it on another core, unless you're trying to obfuscate things to appease the FSF. And then the blob gets appended to the U-Boot image post-build (remember this just gets loaded into the PMU, it never touches the main CPU's execution pipeline):

      https://github.com/u-boot/u-boot/blob/master/tools/imx8m_ima...
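      That post-build step amounts to padding each piece and concatenating the blobs onto the SPL binary as data. A rough sketch with dummy files standing in for the real SPL and blobs (the sizes are invented and the 4-byte alignment is illustrative; the real script's alignment differs):

```shell
set -e

# Dummy stand-ins for the SPL and the 1D training blobs (sizes invented).
dd if=/dev/zero of=u-boot-spl.bin bs=1 count=1001 2>/dev/null
dd if=/dev/zero of=lpddr4_pmu_train_1d_imem.bin bs=1 count=499 2>/dev/null
dd if=/dev/zero of=lpddr4_pmu_train_1d_dmem.bin bs=1 count=301 2>/dev/null

# Pad each piece to a 4-byte boundary (conv=sync zero-fills the last block),
# then concatenate. The blob rides along purely as data; the main CPU never
# executes it, it just gets copied into the PMU at boot.
pad4() { dd if="$1" of="$1.pad" bs=4 conv=sync 2>/dev/null; }
pad4 u-boot-spl.bin
pad4 lpddr4_pmu_train_1d_imem.bin
pad4 lpddr4_pmu_train_1d_dmem.bin
cat u-boot-spl.bin.pad \
    lpddr4_pmu_train_1d_imem.bin.pad \
    lpddr4_pmu_train_1d_dmem.bin.pad > u-boot-spl-ddr.bin

wc -c < u-boot-spl-ddr.bin   # 1001->1004, 499->500, 301->304: 1808 total
```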

      Purism went out of their way and wasted a ton of engineering hours just to create a more convoluted process with precisely the same end result, because somehow all these extra layers of obfuscation made the blob not a blob any more in the FSF's eyes.

      The security question here is whether that blob, while it executes, is in a position to take over the system, either immediately or by arranging to keep running. Can it only talk to the RAM, or can it issue arbitrary bus transactions to other peripherals? Can it control its own run bit, or can the main CPU always quiesce it? Can it claim to be "done" while continuing to run? Can it misconfigure the RAM to cause corruption that lets it take over the system?

      I have seen no security analysis to this effect from anyone involved, because as far as I can tell nobody involved cares about security. The whole purpose of this exercise obviously wasn't security; it was backdooring the system into RYF compliance.
