Comment by mike_hearn
2 years ago
I don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.
So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, and the fact that the description after being added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?
But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.
If it's a genuine backdoor and not a weird debugging feature then it should be rather difficult to add one that looks like this without other people in Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or to modify it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network sufficient to inject backdoors into hardware, or that they have one or more agents inside.
This really shows how dangerous it is to intel agencies when they decide to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero days, including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.
> What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function?
I agree. This looks likely to be an intentional backdoor injected at the hardware level during design. At such a low level I think it could have been accomplished with only a handful of employees in on it. There would have been no need to subvert Apple from the top down, with large numbers of people at many levels being privy.
In early silicon there can be a bunch of registers and functions implemented for testing which are later pulled out. Except maybe one set of registers doesn't get pulled but instead a door knock is added with a weak hash function, making the registers invisible to testers and fuzzing.
It seems a little too convenient that the door knock hash was weak. After all, strong hash functions aren't unknown or hard. The reason it had to be a weak hash function was to create "plausible deniability". If it was a strong hash then once any exploitation was discovered there would be no denying the vuln was intentionally placed. If it really was just a test DMA function that someone supposedly 'forgot' to remove before production silicon, I can't think of a reason to have it behind any kind of door knock in the first place.
I read that it was patched by adding these addresses to the "access denied" list. While I don't know anything about Apple security, I'm stunned that any such low-level access list isn't 'opt-in' instead of 'opt-out'. If it was 'opt-in', it seems like any such 'undocumented' register addresses would be denied by default. And if they were on the 'opt-in' list, yet remained undocumented, then it would be obvious to anyone looking at the security docs that something was amiss.
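A default-deny ('opt-in') check is cheap to sketch. The addresses and ranges below are invented for illustration, not Apple's actual memory map:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct range { uint64_t base, len; };

/* Hypothetical allowlist of DMA-reachable MMIO ranges (made-up addresses).
 * Under opt-in, an undocumented register block is unreachable unless
 * someone adds an entry here -- where a reviewer would see it. */
static const struct range dma_allowed_ranges[] = {
    { 0x200000000ULL, 0x10000 },
    { 0x210000000ULL, 0x04000 },
};

static bool dma_allowed(uint64_t addr)
{
    size_t n = sizeof dma_allowed_ranges / sizeof *dma_allowed_ranges;
    for (size_t i = 0; i < n; i++)
        if (addr >= dma_allowed_ranges[i].base &&
            addr - dma_allowed_ranges[i].base < dma_allowed_ranges[i].len)
            return true;
    return false;   /* default deny: anything unlisted is blocked */
}
```

Contrast that with the patch as described, an opt-out denylist: the backdoor registers were reachable precisely because nobody had listed them.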
It reminds me of the attempted Linux backdoor that was also made to look like a mistake (== replaced with =) [1].
[1] https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-...
It should actually be very easy to add one without anybody noticing. This is the same Apple which, only a few years ago, shipped a version of macOS for months that added the ability to log in to root with any password.
Their review processes are so incompetent that even one of the most security-critical components, root login, let a totally basic “fail your security 101 class” bug through. It is absolutely inexcusable to have a process that bad, and it is indicative of their overall approach. As they say, “one cockroach means an infestation”.
Mistakes happen, but Apple's reputation for strong security is well deserved. They invest heavily, and the complexity of this exploit chain is evidence of that. Linux has had its fair share of trivial root login exploits that somehow got through code review.
No, that is a level of error similar to delivering cars with no airbags in them for months. In any other industry that would indicate an unimaginable level of process failure. Only in commercial software are egregious, basic mistakes swept under the rug as “mistakes happen”.
Just to list a few process failures off the top of my head.
No proofs of specification conformance. No specification conformance tests. No specification. No regression testing. No regression testing of common failure modes. No testing of common failure modes. No enhanced review for critical components. No design conforming to criticality requirements. No criticality requirements. No intention to establish criticality requirements.
In actual safety- and security-critical software development you do all of those except maybe the first. Doing none of them is rank incompetence and clear evidence that you do not know the first thing about the kind of security that can protect against real professionals. And fancy that: Apple cannot, and never has, protected against attackers with minimal resources, like small teams with only a few million dollars.
We can talk about a reputation for “strong” security when they can protect against the standard, commonplace $10M attacks we see every day.
Where do Apple have a reputation for strong security?
Compared to other mainstream operating systems, they seem to consistently be the last to introduce things like stack canaries, non-executable memory segments, and everything else that is considered best practice now.
I’m not trying to defend Apple but I think that line of thinking is pretty cynical and could be used to condemn basically any company or open source project that attracts enough interest for attackers.
APTs probably routinely identify and target such developers. With multi-million dollar payouts for single bugs and high state level actor attention, employee profiling is clearly a known attack vector and internal security teams probably now brief on relevant opsec. FWIW the only Apple kernel developer I knew has somewhat recently totally removed themselves from LinkedIn.
People pretend that bugs don't exist.
Not the software kind, but the good old listening devices.
There's a deep tool chest of spying. Heck, remember when keyboard presses were harvested through a wall? That was at least a decade ago. I think audio of key presses can be disambiguated.
People who work on the kernel are not hard to find.
> But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.
Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.
> Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.
That rule should only be applied in the normal world. In the world of security, where you know bad actors are out there trying to do stuff, it doesn't apply. And there are examples of spy types injecting plans to go halfway with security for their own purposes. Not that this proves the origin of any given plan; incompetence is still one possibility. It just returns to the original point: this stuff is mysterious.
As a defender, you should treat malice and incompetence as functionally equivalent. Save the attribution for the post-mortem (or better yet, don't let it come to that).
> It makes you wonder what is on these guy's iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.
My theory is that defensive cyber security is so hard that it's literally easier to hack the entire world (with a focus on security people) to see if anyone has breached your systems.
Honeypots? Honeypots seem to be consistently underexploited.
A good honeypot is a counterattack on intruder psychology in a host of ways.
> I don't think hiring an ex-Apple dev would let you get the needed sbox
That'd probably depend on which team the dev worked in. If they were in the right team, then it might.
What I mean is that (assuming the sbox values are actually random) you couldn't memorize it short of intensive study and practice of memory techniques. If the "sbox" is in reality some easily memorizable function then maybe, but even then, how many people can remember long hex values from their old jobs?
Two points:
a) If a person is using those values daily for years (or even a couple of months), then it's very likely they'd have memorized them
b) Sometimes just knowing the concept exists for sure is good enough, as you can then go and brute force things until you've worked out the values
But having a predictably generated sequence of numbers is what cryptographers prefer:
https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number