Comment by mike_hearn

2 years ago

That's pretty astonishing. The MMIO abuse implies either that the attackers have truly phenomenal research capabilities, or that they hacked Apple and obtained internal hardware documentation (more likely).

I was willing to believe that maybe it was just a massive NSA-scale research team, up until the part with a custom hash-function sbox. Apple appears to have known that the feature in question was dangerous: they deliberately hid it, whatever it is, and then went further and protected it with a sort of (fairly weak) digital signing feature.

As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.

The way it uses a long chain of high-effort zero-days only to launch an invisible Safari instance that then starts from scratch, loading a web page that uses a completely different chain of exploits to re-hack the device, is also indicative of a massive organization with truly abysmal levels of internal siloing.

Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.

Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and can also detect the cloud iPhone hosting services often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade and actually runs ML on the device, doing object recognition and OCR on photos locally to avoid uploading image bytes: they only upload the ML-generated labels. They truly went to a lot of effort, but all that was no match for a bunch of smart Russian students.

I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature was openly documented it'd have been found much, much sooner.

> If the hardware feature was openly documented it'd have been found much, much sooner.

Well, the point of Kerckhoffs's principle is that it should have been openly documented, and then anyone looking at the docs even pre-publication would have said "we can't ship it like that, that feature needs to go."

This is a fairly incredible attack, and I agree with your analysis. The hidden Safari tab portion where they “re-hack” the device could be bad organizational siloing, as you mentioned, or indicative of a “build your own virus” approach like the kits script kiddies used in the 90s. It could be a modular design for rapid adaptation, i.e. perhaps less targeted.

or Apple just implemented this "API" for them, because they've asked nicely

  • Or they have assets working at Apple... or they hired an ex-Apple employee... etc.

    That's the problem with this sort of security through obscurity; it's only secure as long as the people who know about it can keep it secret.

    • I don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.

So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, and the fact that the description after being added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?

      But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.

      If it's a genuine backdoor and not a weird debugging feature then it should be rather difficult to add one that looks like this without other people in Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or modifying it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network sufficient to inject backdoors into hardware, or they have one or more agents.

This really shows how dangerous it is to intel agencies when they decide to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero-days, including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably, they were after emails describing more zero-days in other programs.

  • Go onto LinkedIn, search for Apple Global Security staff and you’ll get an answer. Much of the staff, including the head, are ex-USIC people. Now perform those searches over time, do a little OSINT, and observe a revolving door where they are not so ex-.

  • I wouldn’t be surprised if one or two very senior people in large tech companies are agency agents, willingly or not.

    I don’t really have any proof, but considering the massive gain it shouldn’t surprise anyone. The agencies might not even need to pay large sums if said assets have vulnerabilities of their own.

  • I think the way it’s done is that the code is presented to them to use; Apple probably don’t even write those parts themselves.

So much misinformation in this thread. It’s a Hamming ECC, as described here[1].

[1] https://social.treehouse.systems/@marcan/111655847458820583

  • More evidence for an ECC, obtained by looking at how the 10 output bits of the function depend on its 256 input bits:

    Each of the 10 parity bits output by the function is the xor of exactly 104 of the 256 input bits.

    Each of the 256 input bits contributes to (= is xor-ed into) either 3 or 5 of the 10 parity bits.

    This is in line with the SEC-DED (single error correction, double error detection) ECC construction from the following paper:

    https://people.eecs.berkeley.edu/~culler/cs252-s02/papers/hs...

    Translating the above observations about the function into properties of the H matrix in the paper:

    Each row of the matrix contains an identical number of ones (104).

    Each column of the matrix contains an odd number of ones (3 or 5).
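The two observations above are exactly the odd-weight-column (Hsiao) SEC-DED construction. Here is a toy sketch in Python with invented small parameters (8 data bits, 5 check bits, weight-3 columns), not the 256-in/10-out function discussed in the thread; it shows why odd-weight columns let you distinguish single-bit errors (odd-weight syndrome, correctable) from double-bit errors (even-weight nonzero syndrome, detect only):

```python
import itertools

# Toy Hsiao-style SEC-DED code. Parameters are illustrative only;
# the function described above uses 256 data bits and 10 parity bits.
R, D = 5, 8

# Each data bit's H-matrix column: a distinct odd-weight (3) subset
# of the R check positions, per the Hsiao construction.
data_cols = list(itertools.combinations(range(R), 3))[:D]

def syndrome(data, checks):
    """XOR the stored check bits against the recomputed ones."""
    recomputed = [
        sum(data[j] for j in range(D) if i in data_cols[j]) % 2
        for i in range(R)
    ]
    return [c ^ r for c, r in zip(checks, recomputed)]

def classify(s):
    w = sum(s)
    if w == 0:
        return "no error"
    # Odd-weight syndrome: single-bit error (correctable);
    # even-weight nonzero syndrome: double-bit error (detect only).
    return "single (correct)" if w % 2 == 1 else "double (detect)"

data = [1, 0, 1, 1, 0, 0, 1, 0]
checks = syndrome(data, [0] * R)  # encoding: choose checks so syndrome is 0

corrupted = data[:]
corrupted[3] ^= 1                              # flip one data bit
print(classify(syndrome(corrupted, checks)))   # single (correct)

corrupted[5] ^= 1                              # flip a second bit
print(classify(syndrome(corrupted, checks)))   # double (detect)
```

A two-bit error XORs two distinct odd-weight columns, which always yields a nonzero even-weight syndrome, so it can never be mistaken for a (single, odd-weight) correctable error.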

  • Very interesting, thanks. Summarizing that thread:

    - The "hash" is probably an error correcting code fed into GPU cache debug registers which will be stored in the cacheline itself, you're expected to compute the ECC because it's so low level. That is, the goal isn't to protect the DMA interface. (but this isn't 100% certain, it's just an educated guess)

    - The "sbox" is similar to but not the same as a regular ECC as commonly used in hardware.

    - Martin argues that the existence of such registers and the code table could have been guessed or brute forced, even though a compromise or info leak from Apple seems more likely. Or possibly even from the old PowerVR days. But if it's the NSA then who knows, maybe they are literally fuzzing hidden MMIO ranges to discover these interfaces.

    - This is possible because the GPU has full DMA access without an IOMMU for performance reasons, so it's fertile ground for such exploits. Probably more will be discovered.

    So that's all reassuring.

  • Why do you need an error-correcting code for a debugging feature, though? I would not protect debug registers with a hash.

    • Because you are DMA-ing the raw bits into the cache with the GPU, but the CPU is going to check those ECC codes on read, as the caches on Apple SoCs are ECC-native. It's an integrity 'protection', not a security 'protection'.

>also is indicative of a massive organization with truly abysmal levels of internal siloing.

Or a joint project between several organizations.

  • Or, like, they have a rootkit and it works, so why reinvent the wheel? They have an attack payload, so why reinvent the wheel? Just plug and play all the packages you need until you can compromise your target device.

>there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes).

Then how did these researchers do it? Not being cheeky, I just don't follow security super closely.

A compromise on the GPU or ARM side seems an equally possible route.

  • What do you mean? Both the GPU and CPU designs are proprietary to Apple. They used to use regular ARM-designed cores, but the last one of those before switching to their own core design was something like the A5 (from memory). It uses the ARM instruction set but isn’t actually designed by ARM at all.

    Similar for the GPU. They may have started with HDL licensed from others (I think their GPU might actually have been directly based on the PowerVR ones they used to use, while the CPU core is basically from scratch), but this vulnerability seems unlikely to have existed since then…

    • CoreSight is not Apple proprietary, it’s part of ARM’s offering. This vulnerability appears to be part of CoreSight.

      > but I believe the ARM one is basically from-scratch

      You believe wrongly, then. There’s still a bunch of ARM IP in their CPU.

> truly phenomenal research capabilities

Maybe a nation state, e.g., APT?

  • Being able to put together tooling with these capabilities makes the attacker an APT by definition. These are generally assumed to be national intelligence services, though that is an assumption. (Among other things, there are multiple countries where the lines between intelligence agencies and their contractors are... fuzzy.)

    And while Kaspersky is refusing to speculate at all about attribution, the Russian government has claimed (without giving specific evidence) that it's NSA.

    • I thought there were private Israeli services/contractors providing APT-as-a-service to, for example, Saudi Arabia and other despotic regimes.

      I think that was in the news back around the Sochi Olympics. The value of cyber capabilities is only going up with time.

      The siloing may be due to multiple contractors. I imagine these exploit vendors are protective of their arsenal of attacks.

      And, as has been said many times, the three-letter agencies aren't exempt from the curse of government-employee mediocrity.