Comment by mindslight

10 years ago

The conclusion of the original post isn't wrong though, despite the exact details being out of date by (a scant) 5 years.

I have a hard time believing that HSIC is the entire extent of the interconnection, with the processors being on the same die and all. Are you asserting that the baseband and application processors use completely independent RAMs and flash memories? Independent memories seem more expensive (price + power) than a single shared bank with an MMU, but since the storage requirements of the baseband are known at design time, perhaps not terribly so.

If they aren't independent memories, then the term DMA actually still applies even if the interconnect protocol is not based on it. Mobile literature is quite inaccessible (another symptom), but everything I've seen refers to having an MMU as the advancement. I have a hard time believing that would be controlled by the application processor (leaving the baseband vulnerable), but please correct me with specifics if this is wrong.

You can use this same kind of logic to suggest that any system is broken.

Some of the cores on a complicated mobile device might have their own memories, and some of them might be isolated from the memories of other cores in silicon. I'm sure there are devices with insecure cores and no isolation at all --- just like there's a ton of C code that will read a URL off the wire into a 128-byte buffer on the stack.

The problem you're suggesting device designers have to solve --- allowing core A access only to a range of the total memory available "on the die" --- isn't a hard one.

From the suggestions you've made in your comments --- and I mean this respectfully --- I think you'd be very surprised by the hardware systems design in a modern mobile device. They are in some ways more sophisticated than the designs used for PCs.

So, the point of this subthread is that mobile devices are much more complicated than the simplistic ("no IOMMU? the baseband can read/write AP memory!") model proposed in the article. It makes an OK overall point (we should care about baseband security!) but uses a very flawed argument to get there.

  • Actually, the real problem I'm suggesting device designers need to solve is doing the bare minimum to make the public (i.e., decentralized vulnerability seekers) believe that the application processor is likely secure from the cell network. Doubly so because the entire history of their industry has embodied the exact opposite philosophy.

    I have no doubt mobile chipsets contain a surprising amount of complexity. I'd love to be surprised by it! But I've only ever run across vague references to various improvements, which are worth just as much as saying "Our code uses AES!".

    It's obviously easy to restrict a core to certain memory ranges. But how is that restriction set? Is it fixed in the mask (leading to inflexibility), or is it set through registers? The bullet point is enough to satisfy a PHB's sense of security, but we know it's in the details of those loose ends that the exploits lie. And Qualcomm's threat model is quite different from the threat model of a phone's owner.

    Simplistic points like the OP's are really a symptom of this not knowing. Can you blame them for not knowing the exact vulnerability? It's like someone picking on a binary blob, saying it could contain a backdoor password. Well, the industry having moved from backdoor passwords to challenge-response isn't really a defense against the overriding point, is it?