Comment by chatmasta
2 years ago
Reading between the lines of TFA, it seems the researchers may also suspect that to be the case:
> Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake. Because this feature is not used by the firmware, we have no idea how attackers would know how to use it.
However, keep in mind that this level of "bugdooring" is possible without Apple's explicit cooperation. In fact, the attackers don't even need to force a bug into the code. It would probably be sufficient to have someone on staff who is familiar with the Apple hardware development process (and therefore knows about the availability of these tools), or to simply get a copy of the firmware's source code. Sophisticated attackers likely have moles embedded within Apple. But they don't even need that here; they could just hire an ex-Apple employee and get all the intel they need.
well of course nobody would have NSA_friendly_override() in the source
plausible deniability is essential in such cases, hence the term bugdoor
This is the same conspiracy mindset of flat earthers, and you deserve your own netflix mockumentary over it.
Because a bug is a bug, its very nature means you cannot prove it isn't malicious; therefore you take it as positive proof of malice and sit pretty because no one can prove a negative.
We had backdoors, then PRISM was revealed. We have bugdoors now. No reason to think three-letter glowies would want to give up any amount of control. They have the 'power' to straight up lie to Congress under oath, see Clapper.
Are you posting from Eglin AFB? Which outfit are you with?
Your double negative made me laugh.
In all seriousness, I wish I could tell you that you're wrong, but I can't.
I always get weird consultants reaching out to me on LinkedIn asking for deets on my org's layout and - curiously - our tech stack. They offer something like $500+ an hour but I don't want to be complicit in some compromise. Private intelligence is such a fascinating industry.
Since they've gone to the trouble of protecting it with an insecure hash, couldn't they also have designed this hardware feature so that it could be completely disabled until the device is rebooted? This vulnerability doesn't persist through reboots, so it would be sufficient to have the firmware lock the feature out during startup outside of development or manufacturing contexts.
> This vulnerability doesn't persist through reboots
I suspect that once you stop receiving data from the device, you just text it the invisible message every few minutes until you start getting data again.