There's a fundamental category error at play here: exploit chains like this one and the one behind FORCEDENTRY[1] cost millions, if not tens of millions, of dollars to discover and weaponize, even before operationalization.
The people finding and building these chains are doing so as part of nation-state intelligence operations; they go well beyond what any reasonable civilian threat model contains.
Put another way: if someone in a competent nation state's IC decides that you're worth $10+ million to compromise, they are going to get you. This is true whether you have an Android, an iPhone, or a Tamagotchi. The only thing that sets Apple apart here is that they've historically beaten Google to the punch on mitigations for these kinds of exploits. But from a threat modeling perspective, this attack is not comparable to the kind that most people have to deal with. Treating it as indicative of an overall security differentiator will not help you make ordinary security decisions, because anybody who gets this kind of attention will be Mossad'ed upon[2].
[1]: https://en.wikipedia.org/wiki/FORCEDENTRY
[2]: https://www.usenix.org/system/files/1401_08-12_mickens.pdf
Sure they do, and yet at the bottom of them we keep finding... iMessage. It's a funnel that takes untrusted external input and feeds it into various ancient, unmaintained native code blobs that were thrown into iOS for time to market. This time it's a '90s Apple extension to TrueType in a '90s Apple library that presumably no font on an iPhone actually uses; last time it was a '90s fax-machine image compression algorithm in a never-updated open-source library. You see, the full exploit cost many, many millions, but at the bottom there are entirely self-inflicted basic failures.
It would be so great if someone at Apple could get the buy-in to clean out this zoo but try explaining that to a product manager at these places.
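To make the bug class concrete: an interpreter for untrusted bytecode (font hinting instructions, image codec state machines) that skips a bounds check. This is a toy Python sketch of the pattern, not Apple's actual code, and the opcodes are made up; in C the unsafe version would be a silent out-of-bounds write, whereas Python at least raises.

```python
# Toy "font hinting" interpreter executing untrusted bytecode against a
# small memory buffer. Illustrative only -- hypothetical opcodes.

def run_unsafe(program: bytes, memory: list) -> None:
    """Vulnerable version: the WRITE opcode (0x01 idx value) never
    validates its index. In C this is an out-of-bounds write into
    adjacent memory; Python merely raises IndexError."""
    pc = 0
    while pc < len(program):
        if program[pc] == 0x01:  # WRITE idx value
            idx, value = program[pc + 1], program[pc + 2]
            memory[idx] = value  # BUG: no check that idx < len(memory)
            pc += 3
        else:
            pc += 1  # ignore unknown opcodes

def run_safe(program: bytes, memory: list) -> None:
    """Hardened version: reject any index outside the buffer."""
    pc = 0
    while pc < len(program):
        if program[pc] == 0x01:
            idx, value = program[pc + 1], program[pc + 2]
            if idx >= len(memory):
                raise ValueError("malformed input: OOB write rejected")
            memory[idx] = value
            pc += 3
        else:
            pc += 1
```

The point of the sketch: the expensive part of a chain is everything built on top, but the entry point is often a one-line missing check in decades-old parsing code.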
> It would be so great if someone at Apple could get the buy-in to clean out this zoo but try explaining that to a product manager at these places.
It’s happening! Admittedly it’s happening slowly, but it is happening. PostScript support recently got stripped out of macOS and iOS explicitly because the security risk was too great, and the effort to make the parsers and renderers safe was greater than any residual benefit from the PostScript format.
It also looks like the “fix” for the TrueType exploit was to simply strip out the ancient extension, because it’s not used anymore. As for why that didn’t happen before now, that’s probably just because nobody knew it still existed.
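That shape of fix (delete the legacy code path rather than audit it) can be sketched too. Hypothetical dispatch table and opcode values, Python stand-in for a native interpreter:

```python
# Shrinking attack surface by deleting, not patching: remove the legacy
# opcode from the dispatch table entirely, so untrusted input can no
# longer reach its handler at all. Illustrative only.

DOCUMENTED_OPCODES = {0x01: "MOVE", 0x02: "LINE", 0x03: "CURVE"}

def dispatch(opcode: int) -> str:
    name = DOCUMENTED_OPCODES.get(opcode)
    if name is None:
        # Previously: fall through to an undocumented legacy handler.
        # Now: treat any undocumented opcode as a malformed font.
        raise ValueError(f"undocumented opcode 0x{opcode:02x} rejected")
    return name
```

Code that can't be reached by attacker-controlled input can't be exploited, which is why removal beats hardening when the feature has no users left.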
Absolutely no disagreement there. iMessage's attack surface is ludicrously large for the actual behavior it delivers on the average user's phone.
It may cost millions, but it doesn't follow that every use (or user) costs the same (one could even call this a category error too).
Neither is "going to get you" a given: maybe another agency is in charge of the alternative methods of getting you, and it has different priorities that don't include your target (or the alternative methods are much more expensive or too slow to be worth it).
The point is that it's incorrect to think of the US (or any other country's) IC as a force of nature, blasting out 0days to random civilians just for kicks. These things are expensive, very expensive, and are carefully orchestrated. They don't look anything like the average civilian's security breach, which is somewhere between "accidentally leaked their own password" and "TSA asks you to unlock your phone."
Your “threat model analysis” takes for granted that a “civilian” is a billion times less important than a “nation-state”. It makes no sense to waste any time analyzing anything after such a conclusion. Therefore, something is wrong here.
I think you've misunderstood. The point was that there are (to simplify) two different threat models at play here: one where your most powerful adversary is somewhere between your family and domestic law enforcement, and another where you are worth $10+ million to a nation state.
99.99% of the world lives in threat model 1; our goal as security minded people is to protect these people. These people want general purpose networked computers in their pockets.
0.01% of the world lives in threat model 2; our goal is also to protect these people. But these people don't get protected while also having general purpose networked computers in their pockets.
Both groups are civilians, and both deserve security. But they also have different demands; if Apple forced Lockdown Mode's usability restrictions onto a billion people tomorrow, a large percentage of them would switch to materially less secure hardware and software vendors.
Why is it that everyone balks at including these shadowy government agencies in threat models? It feels like people just don't want the heat. Would people just give up if it was some corrupt narcostate instead?
They've proven numerous times they couldn't care less about the rights of their own citizens. The US agencies in particular can't even muster any respect for their own allies. I don't even want to imagine what they feel justified in doing to foreigners. They're basically a threat to everyone on earth at this point and we all need the ability to defend against people like them.
So it costs millions to compromise someone? We need to find ways to make it cost billions then. Then we make it cost trillions. They should have to commit crimes against humanity in order to get anyone at all.
Nobody's balking at it. Apple and Google both dedicate significant engineering efforts towards making these kinds of exploit chains even more expensive and unreliable. See for example Lockdown Mode in iOS 16.
The point is this: good security means being able to intelligibly state your threat model and respond to its specific capabilities. Failing to do this results in all kinds of muddied thinking, making it harder to defend against more quotidian adversaries. If your threat model genuinely involves the US IC, then turning on Lockdown Mode is about the best you can do short of throwing your phone in the ocean. By all appearances, that would have prevented this chain.
There haven’t really been all that many hardware exploits for us to judge Apple on this, have there?
Not that I know of. There are other hardware-ish exploits (like checkm8), but I think most have been purely software.
(Hopefully what I said wasn't interpreted as a value judgement about hardware security specifically -- the only point I was trying to make is that ICs spend significant resources discovering exploits on all of these platforms.)
> “Due to the closed nature of the iOS ecosystem, the discovery process was both challenging and time-consuming, requiring a comprehensive understanding of both hardware and software architectures... " -Kaspersky researcher Boris Larin
Supports your point, but it's not an easy argument to win either way. It's "everyone can see it, so the good guys will find it first" vs. "bad guys have a harder time discovering vulns, but once they do, they have gold."
To be fair, that was just Kaspersky taking a jab at Apple, after being absolutely gutted by hackers because of their own poor security posture.
I don’t really see anything wrong with their security posture here.
> it turns out that more people having access to the source code makes it more secure.
The OpenSSL debacle kinda disproved that point, didn’t it?
And just looking up the Linux CVE list https://www.cvedetails.com/vulnerability-list/vendor_id-33/p...
IMHO, at the end of the day, open source vs. closed doesn't matter for the number or severity of security issues; it ends up just being ideological posturing. The bugs exist for a variety of other reasons and tend to have the same root causes attached.
OSS does have other security considerations, though. Flaws may be easier to identify and either exploit or fix. Fixing a flaw is trickier, though, because you need to do it in a way that doesn't advertise it to the world before the fix is sufficiently deployed.
How so? You need to quantify it; e.g., something like number of bugs found per year per LOC.
It has never been said it's your security. It's their security, of their data, on their devices, against their threats and competitors/partners. The user is just an unprivileged data input daemon digitizing “unique personal experiences”, or some other corporate language term.
It's easy to laugh at Juicero users, it's harder to notice the bigger elephant in the room.
This involved an extremely low-level hardware exploit and a ton of other insanity. It's really nothing like Microsoft vs. open source.
Does iPhone boast security? Pretty sure Pixel phones were always way ahead of the pack in terms of security.
Wake me up when Google Drive gets E2EE.