A case against security nihilism

4 years ago (blog.cryptographyengineering.com)

Just the other day I suggested using a yubikey, and someone linked me to the Titan sidechannel where researchers demonstrated that, with persistent access, and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how very fundamentally difficult this was to pull off due to very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
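
As a sketch of what I mean (my own toy example, with a hypothetical length-prefixed wire format, not any particular library): in safe Rust, input that lies about its length becomes an Err instead of an out-of-bounds read.

    fn parse_record(input: &[u8]) -> Result<&[u8], &'static str> {
        // First two bytes: big-endian payload length.
        let (hi, lo) = match (input.first(), input.get(1)) {
            (Some(&hi), Some(&lo)) => (hi, lo),
            _ => return Err("truncated header"),
        };
        let len = u16::from_be_bytes([hi, lo]) as usize;

        // `get` returns None instead of reading past the end of the buffer,
        // so a malformed record is just an error, never undefined behavior.
        input.get(2..2 + len).ok_or("length exceeds input")
    }

    fn main() {
        assert!(parse_record(&[0x00, 0x03, b'a', b'b', b'c']).is_ok());
        assert!(parse_record(&[0x00, 0xff, b'a']).is_err()); // lies about its length
    }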

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA

  • I'll conclude with a philosophical note about software design: Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well.

    -cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
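
    As a concrete sketch of the layering cperciva describes (my illustration, assuming the `hmac` and `sha2` crates with their current API; `toy_cipher` is a placeholder, not real encryption -- the point is only the ordering):

      use hmac::{Hmac, Mac};
      use sha2::Sha256;

      type HmacSha256 = Hmac<Sha256>;

      // Stand-in for a real cipher, kept trivial so the sketch stays short.
      fn toy_cipher(key: &[u8], data: &[u8]) -> Vec<u8> {
          data.iter().zip(key.iter().cycle()).map(|(d, k)| d ^ k).collect()
      }

      fn seal(enc_key: &[u8], mac_key: &[u8], plaintext: &[u8]) -> Vec<u8> {
          let mut out = toy_cipher(enc_key, plaintext);          // encrypt first...
          let mut mac = HmacSha256::new_from_slice(mac_key).unwrap();
          mac.update(&out);                                      // ...then MAC the ciphertext
          out.extend_from_slice(&mac.finalize().into_bytes());
          out
      }

      fn open(enc_key: &[u8], mac_key: &[u8], msg: &[u8]) -> Option<Vec<u8>> {
          let (ciphertext, tag) = msg.split_at(msg.len().checked_sub(32)?);
          let mut mac = HmacSha256::new_from_slice(mac_key).unwrap();
          mac.update(ciphertext);
          // Reject forgeries here, so buggy or side-channel-prone decryption code
          // never runs on attacker-controlled input.
          mac.verify_slice(tag).ok()?;
          Some(toy_cipher(enc_key, ciphertext))
      }

      fn main() {
          let sealed = seal(b"enc key", b"mac key", b"hello");
          assert_eq!(open(b"enc key", b"mac key", &sealed).as_deref(), Some(&b"hello"[..]));

          let mut tampered = sealed.clone();
          tampered[0] ^= 1;
          assert!(open(b"enc key", b"mac key", &tampered).is_none()); // caught by the MAC
      }

    If everything else works, the MAC check adds nothing; when something else breaks, it is exactly the tolerance of component failure the quote is talking about.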

    • I've worked as a structural engineer (EIT) on bridges and buildings in Canada before getting bored and moving back into software.

      There are major differences between designing bridges and crafting code. So many, in fact, that it is difficult to even know where to start. But with that proviso, I think the concept of safety versus the concept of security is one that so many people conflate. We design bridges to be safe against the elements. Sure, there are 1000-year storms, but we know what we're designing for, and it is fundamentally an economic activity. We design these things to fail at some regularity because to do otherwise would require an over-investment of resources.

      Security isn't like safety because the attack scales up with the value of compromising the target. For example, when someone starts a new social network and hashes passwords, the strength of their algorithm may be just fine, but once they have millions of users it may become worthwhile for attackers to invest in rainbow tables or other means to defeat their salted hashes.
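
      To make that concrete (my sketch, not the parent's; a real system would use a slow, memory-hard KDF such as Argon2 or scrypt rather than a bare hash, which appears here only for brevity), the per-user salt is what forces an attacker to pay the guessing cost once per user instead of reusing one precomputed table:

        use sha2::{Digest, Sha256};

        // The salt should come from a CSPRNG, be unique per user, and be stored
        // alongside the hash.
        fn hash_password(salt: &[u8], password: &str) -> Vec<u8> {
            let mut h = Sha256::new();
            h.update(salt);                 // same password, different salt =>
            h.update(password.as_bytes());  // different hash for every user
            h.finalize().to_vec()
        }

        fn main() {
            let a = hash_password(b"salt-for-alice", "hunter2");
            let b = hash_password(b"salt-for-bob", "hunter2");
            assert_ne!(a, b); // a table built for one user is useless against another
        }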

      Security is an arms race. That's why we're having so much trouble securing these systems. A flood doesn't care how strong your bridge is, or where it is most vulnerable.

      14 replies →

    • Also, pointing your (or anyone's) finger at the already overworked and exploited engineers in many countries is just abysmal in my opinion. It's not an engineer's decision to make what the deadlines for finishing software are. Countless companies are controlled by business people. So point your finger at them, because they are the ones who don't give a flying f*&% whether the software is secure or not. We engineers are very well aware of both the need for security and its implications. So this kind of name-shaming must be stopped by the security community, now and forever, in my opinion.

      1 reply →

    • I think this quote is fundamentally wrong and intentionally misleading. The equivalent question would be "can we find any cracks in it?", which makes complete sense. And in fact it is frequently asked during inspections. The security-flaw question should be asked in the same vein.

    • This is one of the best examples I’ve ever seen supporting the claim that analogies aren’t reasoning.

      Edit: apparently elaboration is in order. In mechanical engineering one deals with smooth functions. A small error results in a small propensity for failure. Software meanwhile is discrete, so a small error can result in a disproportionately large failure. Indeed getting a thousandth of a percent of a program wrong could cause total failure. No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong. In software the margin of error is literally undefined behavior.
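
      A one-character slip of the kind meant here (my illustration, not the commenter's):

        fn main() {
            let samples = [10, 20, 30];
            let mut sum = 0;
            // Off-by-one: `..=` instead of `..` walks one element past the end.
            for i in 0..=samples.len() {
                sum += samples[i]; // in C this read is undefined behavior; here it
                                   // aborts with a bounds-check panic
            }
            println!("{}", sum);
        }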

      3 replies →

  • > I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

    I'm beginning to worry that every time Rust is mentioned as a solution for every memory-unsafe operation we're moving towards an irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.

    • Not just memory safety. Rust also prevents data races in concurrent programs. And there are a few more things too.

      But these tricks have the same root: What if we used all this research academics have been writing about for decades, improvements to the State of the Art, ideas which exist in toy languages nobody uses -- but we actually industrialise them so we can use the resulting language for Firefox and Linux not just get a paper into a prestigious journal or conference?

      If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.

      If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

      Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.

      Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But, is this all just stupid hype for nothing? Er no. In 25 years everybody will understand what an "Internet streaming radio station" would be, they just aren't using RealAudio, the actual technology they use might be MPEG audio layer III aka MP3 (which exists in 1995 but is little known) or it might be something else, they do not care.

      28 replies →

    • What’s with the backlash against Rust? It literally is “just another language”. It’s not the best tool for every job, but it happens to be exceptionally good at this kind of problem. Don’t you think it’s a good thing to use the right tool for the job?

      30 replies →

    • I'm a security professional, so this is based on experience and expertise, not some sort of hype or misplaced enthusiasm.

    • The article we are commenting on is about targeted no-interaction exploitation of tens of thousands of high profile devices. I think this is one of the areas where there is a very clear safety value (not just theoretical).

    • Whole classes of bugs -- the most common class of security-related bugs in C-family languages -- just go away in safe Rust with few to no drawbacks. What's irrational about the exuberance here? Rust is a massive improvement over the status quo we can't afford not to take advantage of.

    • > how much value that safety really has over time

      Billions and billions of dollars. Large organizations like Microsoft and Google have published numbers on the proportion of vulns in their software that are caused by memory errors. As you can imagine, a lot of effort is spent within these institutions to try to mitigate this risk (world class fuzzing, static analysis, and pentesting) yet vulns continue to persist.

      Rust is not the solution. Memory-safe languages are. It is just that there aren't many such languages that can compete with C++ when it comes to speed (Rust and Swift are the big ones) so Rust gets mentioned a lot to preempt the "but I gotta go fast" concerns.

    • There's literally zero evidence that a program written in Rust is actually practically safer than one written in C at the same scale. And there won't be any evidence of this for some time because no Rust program is as widely deployed as an equivalent highly used C program.

      4 replies →

  • There's still a lot of macho resistance to using safe languages, because "I can write secure code in C!"

    "You" probably can. I can too. That's not the point.

    What happens when the code has been worked on by other people? What happens after a few dozen pull requests are merged? What happens when it's ported to other platforms with different endian-ness or pointer sizes or hacked in a late night death march session to fix some bug or add some feature that has to ship tomorrow? What happens when someone accidentally deletes some braces with an editor's refactor feature, turning a "for { foo(); bar(); baz(); }" into a "for foo(); bar(); baz();"?

    That's how bugs creep in, and the nice thing about safe languages is that the bugs that creep in are either caught by the compiler or result in a clean failure at runtime instead of exploitable undefined behavior.
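
    To make the "caught by the compiler" point concrete (my illustration): in Rust the brace-deleting slip described above isn't even expressible, because a loop body has to be a block.

      fn foo() {}
      fn bar() {}
      fn baz() {}

      fn main() {
          for _ in 0..3 { foo(); bar(); baz(); }   // fine

          // The brace-less version is a compile error, not a silent change in
          // which statements run inside the loop:
          // for _ in 0..3 foo(); bar(); baz();
      }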

    Speed is no longer a good argument. Rust is within a few percent of C performance if you code with an eye to efficiency, and if you really need something to be as high-performance as possible, code just that one thing in C (or ASM) and code the rest in Rust. You can also use unsafe to squeeze out performance if you must, sparingly.

    Oh and "but it has unsafe!" is also a non-argument. The point of unsafe is that you can trivially search a code base and audit every use of it. Of course it's easy to search for unsafe code in C and C++ too... because all of it is!
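
    For example (a toy of my own): the only place this function could violate memory safety is the marked block, which a reviewer can find with a plain `grep -rn unsafe`.

      fn first_byte(data: &[u8]) -> Option<u8> {
          if data.is_empty() {
              return None;
          }
          // SAFETY: emptiness is checked above. Written with `unsafe` only to show
          // what an auditable block looks like; `data.first().copied()` is the
          // normal safe way to do this.
          Some(unsafe { *data.get_unchecked(0) })
      }

      fn main() {
          assert_eq!(first_byte(b"hi"), Some(b'h'));
          assert_eq!(first_byte(b""), None);
      }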

    If we wrote most things and especially things like parsers and network protocols in Rust, Go, Swift, or some other safe language we'd get rid of a ton of low-hanging fruit in the form of memory and logic error attack vectors.

    • > "You" probably can. I can too. That's not the point.

      I'm not even sure that's true. I do agree with you that the argument that you need to hire other people is more convincing, but I'd wager that no single human on the planet can actually write a vuln-free parser of any complexity in C on their first attempt - even if handed the best tools that the model checking community has to offer.

      Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.

      9 replies →

  • I have used a Yubikey for years. Nothing is perfect, but as you mentioned, the only hacks of them have been with persistent physical access, or somehow getting the end user to hit the activate button tens of thousands of times.

    On any system, if you give an attacker physical access to the device, you are done. Just assume that. If your Yubikey lives in your wallet, or on your key chain, and you only activate it when you need it, it is highly unlikely that anyone is going to crack it.

    As far as physical device access goes, my last employer maintained a 'garage' of laptops and phones for employees traveling to about a half dozen countries. If you were going there, you left your corporate laptop and phone in the US and took one of these 'travel' devices with you for your trip. Back home, those devices were never allowed to connect to the corporate network. When you handed them in, they were wiped and inspected, but IT assumed that they were still compromised.

    Lastly, a Yubikey, as a second factor, is supposed to be part of a layered defense: basically forcing the attacker to hack both your password and your Yubikey.

    It bugs me that people don't understand how important two factor auth is, and also how crazy weak SMS access codes are.

  • This depends...

    I've had an argument here about SMS for 2FA... Someone said that SMS for 2FA is broken because some companies misuse it for 1FA (e.g. password reset)... but in essence, a simple SMS verification solves 99.9% of issues with e.g. password leaks and password reuse.

    No security solution is perfect, but using a solution that works 99% of the time is still better than no security at all (or just one factor).

    • I'm pretty sure I've written on HN before that SMS 2FA doesn't do much against phishing, which we know is a big problem, but worse it creates a false reassurance.

      The user doesn't reason correctly that the bank sent them this legitimate SMS 2FA message because a scammer is now logging into their account; instead they assume it's because this is the real bank site they've reached via the phishing email, and therefore their concern that it seemed maybe fake was unfounded.

      5 replies →

  • For anyone who's fresh to cyber security, the fundamental axiom is that anything can be cracked; it is only a matter of computation (time and resources). Just as the dose makes the poison (sola dosis facit venenum).

    Suppose you have a secret that is RSA-encrypted: with the kind of computers we have now, we might be looking at three hundred trillion years to crack it, according to Wikipedia. Obviously the secret would have lost its value by then, and the resources required to crack it would be worth more than the secret itself. Even with quantum computing, we are still looking at 20+ years, which is enough for most secrets: you have plenty of time to change it, or it will have lost its value. So we say that's secure enough.
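
    To put rough numbers on that (my illustration, using a 128-bit symmetric key, which is far easier to brute-force than RSA-2048):

      2^128 ≈ 3.4 × 10^38 candidate keys
      at 10^12 guesses/second: ≈ 3.4 × 10^26 seconds ≈ 10^19 years

    Long before the math is the weak point, the key has been rotated or the secret has stopped mattering - which is exactly the "secure enough" trade-off described above.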

    • If that’s a fundamental axiom of cyber security then it’s obvious that it’s a field of fools. This is a poor, tech-driven understanding of security that will leave massive gaps in its application to technology.

  • From the video: "Cloud computing is really a fancy name for someone else's computer."

    He goes on to discuss the expansion of "trust boundaries".

    Big Tech: Use our computers, please!

  • > The use of memory unsafe languages for parsing untrusted input is just wild.

    I think some of the vulnerabilities have been found in image file format or PDF parsing libraries. These are huge codebases that you can't just rewrite in another language.

    At the same time, Apple is investing huge amounts of resources into making their (and everyone elses) code more secure. Xcode/clang includes a static analyzer that catches a lot of errors in unsafe languages, and they include a lot of "sanitizers" that try to catch problems like data races etc.

    And finally, they introduced a new, much safer programming language that prevents a lot of common errors, and as far as I can tell they are taking a lot of inspiration from Rust.

    So it's not like Apple isn't trying to improve things.

    • These are stopgaps, not long term solutions.

      MSan has a nontrivial performance hit and is a problem to deploy on all code running a performance-critical service. Static analysis can find some issues, but any sound static analysis of a C++ program will rapidly wreak havoc and report false positives out the wazoo. Whole-program static analysis (which you need to prevent false positives) is also a nightmare for C++ due to the single-translation-unit compilation model.

      All of the big companies are spending a lot of time and money trying to make systems better with the existing legacy languages, and this is necessary today because they have so much code and you can't just YOLO and run a converter tool to convert millions and millions of lines of code to Rust. But it is very clear that this does not straight up prevent the issue completely the way using a safe language does.

  • I was with you until the parsing with memory unsafe languages. Isn’t that exactly the kind of “random security not based on a threat model” type comment you so rightly criticised in the first half of your comment?

    • I think you must have misunderstood the point the parent comment was trying to make. Memory-safety issues are responsible for a majority of real-world vulnerabilities. They are probably the most prevalent extant threat in the entire software ecosystem.

      10 replies →

    • Based on the hundreds, perhaps thousands of critical vulnerabilities that are due directly to parsing user input in memory-unsafe languages, usually resulting in remote code execution, how's this for a threat model: attacker can send crafted input that contains machine code that subsequently runs with the privileges of the process parsing the input. That's bad.

    • The attack surface is the parser. The ability to access it is arbitrary. I can't build a threat model beyond that for any specific case, but in the case of a text messaging app I absolutely expect "attacker can text you" to be in your threat model.

    • There are very few threat models that a memory unsafe parser does not break.

      Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.

      4 replies →

    • I mean, the threat model is that (1) memory leaks/errors are bad, (2) programmers make those mistakes all the time, and (3) using memory-safe languages is cheap; therefore, (4) we should use memory-safe languages more often.

  • “The use of memory unsafe languages for parsing untrusted input is just wild.” Indeed! The casualness of attitudes towards input validation continues to floor me. “computer science” is anything but :p

  • > I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust

    Can you though? Where/how are you deploying your Rust executables that isn't relying deeply on OS code written in "wild" "memory unsafe languages"?

    I mean, I _guess_ it'd be possible to write everything from the NIC firmware all the way through your network drivers and OS to ensure no untrusted input gets parsed before it hits your Rust code, but I doubt anyone except possibly niche academic projects or NSA/MOSSAD devs has ever done that...

    • Yeah I mean, 100%, I hate that I run my code on Linux, which I don't consider to be a well secured kernel. It's an unfortunate thing, but such is life.

      But attackers have significantly less control over that layer. This is quite on topic with regards to security nihilism - my parser code being memory safe means that the code that's directly interfacing with attacker input is memory safe. Is the allocator under the hood memory safe? Nope, same with various other components - like my TCP stack. But again, attackers have a lot less control over that part of the stack, so while unfortunate, it's not my main concern.

      I do hope to, in the future, leverage a much much more security optimized stack. I'd dive into details on how I intend to do that, but I think it's out of scope for this conversation.

  • > Just the other day I suggested using a yubikey

    The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.

    Everybody defaults to a small number of security/identity providers because running the system is so stupidly painful. Hand a YubiKey to your CEO and their secretary. Make all access to corporate information require a YubiKey. They won't last a week.

    We don't need better crypto. Crypto is good enough. What we need is better integration of crypto.

    • > The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.

      But what does this have to do with the FIDO authenticator?

      At first I thought you said $100 per user, and I figured, wow, you are buying them all two Yubikeys, that's very generous. And then I realised you wrote "per month".

      None of this costs anything "per month per user". You're buying some third party service, they charge whatever they like, this is the same as the argument when people said we can't have HTTPS Everywhere because my SSL certificate cost $100. No, you paid $100 for it, but it costs almost nothing.

      I built WebAuthn enrollment and authentication for a vanity site to learn how it works. No problem, no $100 per month per user fees, just phishing proof authentication in one step, nice.

      The integration doesn't get any better than this. I guess, having watched a video today of people literally wrapping up stacks of cash to FedEx their money to scammers, I shouldn't underestimate how dumb people can be, but really, even if you struggle with TOTP, do not worry: WebAuthn is easier than that as a user.

      2 replies →

    • > Hand a YubiKey to your CEO and their secretary.

      Well, I'm the CEO lol so we have an advantage there.

      > The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security.

      Totally, this is a huge issue to me. I strongly believe that we need to start getting TPMs and hardware tokens into everyone's hands, for free - public schools should be required to give it to students when they tell them to turn in homework via some website, government organizations/ anyone who's FEDRAMP should have it mandated, etc. It's far too expensive today, totally agreed.

      edit: Wait, per month? No no.

      > We don't need better crypto.

      FWIW the kicker with yubikeys isn't really anything with regards to cryptography, it's the fact that you can't extract the seed and that the FIDO2 protocols are highly resistant to phishing.

  • I am scared to death of rust.

    It appears that if one uses it, one becomes evangelized by it, spreads the word "Praise Rust!", and so forth.

    Anything so evangelized is met with strong skepticism here.

    • What scares me about Rust is that people put so much trust in it. And part of that is because of what you mention, the hype in other words.

      I don't follow this carefully, but even I have heard of at least one Rust project that, when audited, failed miserably. Not because of memory safety but because the programmer had made a bunch of rookie mistakes that senior programmers might have avoided.

      So in other words, Rust's hype is going to lead to a lot of rewrites and a lot of new software being written in Rust. And much of that software will have simple programming errors that you can do in any language. So we're going to need a whole new wave of audits.

      1 reply →

  • From the Ars reference: "There are some steep hurdles to clear for an attack to be successful. A hacker would first have to steal a target's account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning, were it ever to happen in the wild, would likely be done only by a nation-state pursuing its highest-value targets."

    "only by a nation-state"

    This ignores the possibility that the company selling the solution could itself easily defeat the solution.

    Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".

    Further, anyone could potentially hire them to assist. What is to stop this if secrecy is preserved?

    We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.

    Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.

    The comment on that Ars page is more realistic than the article.

    Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.

    • Yes, if you don't trust Google don't use a key from Google. Is that what you're trying to say? If your threat model is Google don't buy your key from Google. Do I think that's probably a stupid waste of thought? Yes, I do. But it's totally legitimate if that's your threat model.

      1 reply →

    • > This ignores the possibility that the company selling the solution could itself easily defeat the solution.

      How do you imagine this would work?

      The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.

      I think you're imagining a lot of moving parts to the "solution" that don't exist.

      14 replies →

The article says that although "you can't have perfect security," you can make it uneconomical to hack you. It's a good point, but it's not the whole story.

The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.

I would never think to spend a million dollars on securing my home network (including other non-dollar costs like inconveniencing myself). Let's suppose that spending $1M would force the US NSA to spend $10M to hack into my home network. The people making that decision aren't spending $10M of their own money; they're spending $10M of the government's money. The NSA doesn't care about $10M in the same way that I care about $1M.

As a result, securing yourself even against a dedicated attacker like Israel's NSO Group could cost way, way more than a simple budget analysis would imply. I'd have to make the costs of hacking me so high that someone at NSO would say "wait a minute, even we can't afford that!"

So, sure, "good enough" security is possible in principle, but I think it's fair to say "You probably can't afford good-enough security against state-level actors."

  • Whether $10M is a lot of money to the NSA or not is also only part of the story. The remaining part is how much they value the outcome they will achieve from the attack.

    That reminds me somehow of an old expression: If you like apples, you might pay a dollar for one, and if you really like apples you might pay $10 for one, but there's one price you'll never pay, no matter how much you like them, and that's two apples.

    • You're right. It's only part of the story. Another part of the story is that the cost of these attacks is so far below the noise floor of any state-level actor that raising their costs will probably have perverse outcomes. For the same reason you don't routinely take half a course of antibiotics, there are reasons not to want to deliberately drive up the cost of exploits as an end in itself. When you do that, you're not hurting NSO; you're helping them, since their business essentially boils down to taking a cut.

      We should do things that have the side effect of making exploits more expensive, by making them more intrinsically scarce. The scarcer novel exploits are, the safer we all are. But we should be careful about doing things that simply make them cost more. My working theory is that the more important driver at NSA isn't the mission as stated; like most big organizations, the real driver is probably just "increasing NSA's budget".

      3 replies →

  • > The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.

    They also have something else most people don't have: time. Nation-states and actors at that level of sophistication can devote years to their goals. This is reflected in the acronym APT, or Advanced Persistent Threat. It's not just that once they have hacked you they'll stick around until they are detected or have everything they need; it's also that they'll keep trying, playing the long game, waiting for their target to get tired, make a mistake, or fail to keep up with advancing sophistication.

    In your example, you spend $1M on your home network, but do you keep spending the money, month after month, year after year, to prevent bitrot? Equifax failed to update Struts to address a known vulnerability, not just because of cost but also time. It's cost around $2 billion so far, and the final cost might never really be known.

  • Most organizations should not really be factoring state level actors into their risk assessment. It just doesn't make sense. If you are an actual target for state level actors you likely will know about it. You will also likely have the funding to protect yourself against them. And if you can't, that isn't a failing of your risk assessment decision making.

    • An illustrative counterexample to "if you are an actual target for state level actors you likely will know about it" is the case of Intellect Services, a small company (essentially, a father and daughter) developing a custom accounting product (M.E.Doc) that assists in the preparation of Ukrainian tax documents.

      It turned out that they were a target for state-level actors, as their software update distribution mechanism was used in a supply-chain attack to infect many companies worldwide (major examples are Maersk and Merck) in the NotPetya data destruction (not ransomware, as it's often wrongly described) attack, causing billions of dollars in damage. Here's an article about them: https://www.bleepingcomputer.com/news/security/m-e-doc-softw...

      In essence, you may be an actual target for state level actors not because they care about you personally, but because you just supply some service to someone whom they're targeting.

      1 reply →

    • Meanwhile, the biggest state-level actors are developing offensive capabilities at the scale of "we can wipe out everything on the enemy's entire domestic network" which includes thousands of businesses of unknown value. The same way strategic nuclear weapons atomize plenty of civilian infrastructure.

      Sure, in that kind of event, an org might be more concerned with flat out survival. But you never know if you'll be roadkill. And once that capability is developed, there is no telling how some state-level actors are connected to black markets and hackers who are happy to have more ransomware targets. Some states are hurting for cash.

  • "So, sure, "good enough" security is possible in principle, I think it's fair to say "You probably can't afford good-enough security against state-level actors.""

    I don't think so. State-level actors also have limited resources (and small states have very limited resources), and every time they deploy their tools, they risk that they get discovered, analyzed, added to the antivirus heuristics, and with that rendered almost worthless. Or they risk the attention of the intelligence agencies of your state. If that happens, heads might roll, so those heads want to avoid it.

    So if there is a state-level group looking for easy targets for industrial espionage - and they find tough security, where it looks like people care - I would say chances are that they go look for easier targets (of which there are plenty).

    Unless of course there is a secret they absolutely want to have. Then yes, they will likely get in after a while, if the state backing them is big enough.

    But most hacking is done on easy targets, so "good enough" security means not being an easy target, which also means not getting hacked in most cases. That is the whole point of "good enough".

  • This reminds me of the US's program against the Soviet Union in Afghanistan (or at least one fictionalised version of it). Supposedly the pitch for funding involved the cost of a US stinger missile being much less than the cost of a Soviet helicopter. If it's an effective means to force a rivalrous actor to waste money, the fact the decision makers don't care about the money they spend could be a counterattack vector.

  • > The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.

    I think you have a false perception of the budgetary constraints mid-level state actors are dealing with. Most security agencies have set budgets and a large number of objectives to achieve, so they'll prioritize cost-effective solutions/cheap problems (whereby the cost is both financial and political but finances act as hard constraint). Germany actually didn't buy Pegasus largely because it was too expensive.

    Without Pegasus, Morocco's security apparatus probably wouldn't have the resources otherwise to target such a wide variety of people, ranging from Macron to their own king.

  • Sure, there might be other theoretical concerns beyond just getting to “uneconomical”, but they are all basically irrelevant compared to the fundamental economical problem that you do not spend $1M to force the attacker to spend $10M, you spend $100M to make the attacker spend $1M. We need to start by improving systems by 10,000% to fix that problem before worrying about minutiae like relative willingness to pay.

  • For the likes of NSO there is no “we can’t afford that,” there is only “your Highness, this will cost $MUCH” and for, say, Saudi Arabia the boss might not even blink.

  • "Secure" and "uneconomical" are generally equivalent. A door lock is an _economic_ instrument, that just happens to leverage the laws of physics in its operation. If the NSOs of the world are your enemy, and they are by definition of having you on their list, then you must wisely expend your energy on making their attack more costly or else get eaten.

  • > I'd have to make the costs of hacking me so high that someone at NSO would say "wait a minute, even we can't afford that!

    Not really. You just have to have what just happened happen a couple more times and they are finished. If they can't protect their data they have no business: their reputation is destroyed, and there's no point in hiring them if a week later the list of the people you are spying on leaks. Turn the game around. Info security is asymmetric by definition; it's a lot easier to attack than to defend. As a defender you need to plug all possible holes, but if you become the attacker you just need to find one.

"the fact that iMessage will gleefully parse all sorts of complex data received from random strangers, and will do that parsing using crappy libraries written in memory unsafe languages."

C. 30 years of buffer overflows.

  • Yep, making companies liable for damages would incentivize them to stop relying on C for a lot of things. Apple knows full well iMessage has security relevant bugs they just haven't found yet. Hence their attempts to shield it and mitigate those issues with layers of security. However, the appropriate action would be to reimplement it in something less likely to get exploited. That's expensive. Liability would justify this cost. Companies like Google, MS, Apple, etc. rely on large amounts of legacy C code. There are quite a few repeat offenders in terms of having security vulnerabilities exploited in the wild.

    Basically, my reasoning here is that Apple knows it is exposing users to hacks because of quality issues with this and other components. The fact that they try to fix them as fast as they find them is nice but not good enough: people still get hacked. When the damage is mostly PR, it's manageable (to a point). But when users sue and start claiming damages, it becomes a different matter: that gets costly and annoying real quick.

    Recently we have seen several companies embrace Rust for OS development, including even Apple. Both Apple and Google have also introduced languages like Swift and Go that likewise are less likely to have issues with buffer overflows. Switching languages won't solve all the problems, but buffer overflows should largely become a thing of the past. So we should encourage them to speed that process up.

"What can we do to make NSO’s life harder?" That seems pretty simple to me: We ask Western democratic governments (which include Israel) to properly regulate the cybersecurity industry.

This is the purpose of governments; it is why we keep them around. There is no really defensible reason why the chemical, biological, radiological and nuclear industries are heavily regulated, but "cyber" isn't.

  • Nobody has any credible story for how regulations would prevent stuff like this from happening. The problem is simple economics: with the current state of the art in software engineering, there is no way to push the cost of exploits (let alone supporting implant tech) high enough to exceed the petty cash budget of state-level actors.

    I think we all understand that the medium-term answer to this is replacing C with memory-safe languages; it turns out, this was the real Y2K problem. But there's no clear way for regulations to address that effectively; assure yourself, the major vendors are all pushing forward with memory safe software.

    • Well, first of all, the NSO Group in its current form wouldn't exist if Israel regulated them; at the very least it wouldn't exist as a state-level-equivalent actor.

      Second of all if you can't push the costs high enough then it becomes time to limit the cash budget of state level actors. Which is hardly without precedent.

      For some reason you seem to only be looking at this as a technology problem, while at the core it is far more political. Sure technology might help, but that's the raison d'etre of technology.

      11 replies →

    • You're extremely correct, of course, but what I'm really proposing here is something much more boring than actually solving the technical problem(s). How about a dose of good old-fashioned bureaucracy? If you want to sell exploits, in a Western country, then yeah sure you can, but first you should have to go through an approval process and fill in a form for every customer and have them vetted, yada yada.

      This wouldn't do anything to stop companies who base themselves in places like Russia. It wouldn't even really do anything to stop those who base themselves in the Seychelles. But, you want to base yourself in a real bona-fide country, like the USA or France or Israel or Singapore? Then you should have to play by some rules.

      3 replies →

    • Nor does anyone need one, yet. Again, the point of government -- force the dang discussion; that's what investigations, committees, et al are for.

      It's fun to make fun of old people in ties asking (to us) stupid questions about technology in front of cameras, but at the end of the day, it's a crucial step in actually getting something done about all this.

    • If the governments can't ban exploits, perhaps they can ban writing commercial programs in memory unsafe languages? Countries could agree on setting a goal, e.g. that by 2040 all OSs etc. need to use a memory safe language.

  • The whole approach of regulating on the level of "please don't exploit vulnerable systems" seems reactive to me. If the cat's out of the bag on a vulnerability and it's just data to copy and proliferate, there's not much a government can do other than threaten repercussions, which only works if you get caught.

    The only tractable way to deal with cyber security is to implement systems that are secure by default. That means working on hard problems in cryptography, hardware, and operating systems.

    • By the exact same logic, implementing physical security on the level of "please don't kill vulnerable people" would also be reactive. If the cat's out of the bag on a way to kill people, well, don't we need to implement humans that are unkillable in that way? That's going to mean working on some hard problems...

      No. We don't operate that way, and we don't want to.

      But for us to not operate that way in cyberspace, we need crackers (to use the officially approved term) to be at least as likely to be caught (and prosecuted) as murderers are. That's a hard problem that we should be working on.

      (And, yes, we need to work on the other problems as well.)

      3 replies →

  • > We ask Western democratic governments (which include Israel) to properly regulate the cybersecurity industry.

    That's a bit naive. Governments want surveillance technology, and will pay for it. The tools will exist, and like backdoors and keys in escrow, they will leak, or be leaked.

    The reason why all those other industries are regulated as much as they are is that governments don't need those types of weapons the way they need information. It's messy and somewhat distasteful to overthrow an enemy in war, but undermining a government through surveillance, disinformation, and propaganda until it collapses and is replaced by a more compliant government is the bread-and-butter of world affairs.

    • They want nukes too, which also exist. It doesn't mean they'll get them.

      Non-proliferation treaties are effective against nuclear weapons; they'd be effective against "cyber" weapons.

      1 reply →

    • The thing is, countries with a vast intellectual property base have more to lose in the game, thus they should favor defense over offense. Like Schneier says, we must choose between security for everyone or security for no one.

      1 reply →

  • Yeah, it seems kind of silly to start with the fact that something has caused "the bad thing everyone said would happen" to happen and somehow not see that thing as a blatant security hole in and of itself.

    I mean sure technical solutions are available and do help, but to only look at the technical side and ignore the original issue seems like a mistake.

    • > a blatant security hole in and of itself

      That means our society, our governments, our economic systems are security holes. Everyone saying the Bad Thing would happen did so by looking, not at technology, but at how our world is organized and run. The Bad Thing happened because all those actors behaved exactly as they are designed to behave.

  • This still leaves us with threats from state actors and cybersecurity firms answering only to Eastern, undemocratic governments.

  • “Cyber” is pretty “well” regulated; NSO exports under essentially an arms export license.

  • > to properly regulate the cybersecurity industry

    Regulated Cybersecurity: Must include all mandatory government backdoors.

The 'economic' argument simply doesn't work. Does the author think that every "tin-pot authoritarian" owns a poor country scrabbling in the unproductive desert for scraps? Of course not!

Literally one of the best customers of NSO tools is Saudi Arabia (SA), where money literally bursts out of the ground in the form of crude oil. The market cap of Saudi Aramco is 3x that of Apple's. Good luck making it "uneconomical" for SA to exploit iPhones.

I'll even posit that there is literally no reasonable price at which the government of SA cannot afford an exploitation tool. The governments that purchase these tools aren't doing it for shits and giggles. They're doing it because they believe that their targets represent threats to their continued existence.

Think of it this way, if it costs you a trillion dollars to preserve your access to six trillion dollars worth of wealth, would you spend that? I would, in a heartbeat.

  • I respectfully disagree.

    If we can raise the cost from $100k per target to $10m per target, even SA will reduce the number and breadth of targets.

    They do have limited funds, and they want to see an ROI. At a lower cost, perhaps they’ll just monitor every single journalist who has ever said a bad thing about the king. As that price increases, they’ll be more selective.

    Like Matt said, that’s not ideal. But forcing a more highly targeted approach rather than the current fishing trawler is an incremental improvement.

  • The NSO target list has like 15,000 Mexican phone numbers on it. You don't think making exploits more expensive would force attackers to prioritize only the very highest value targets?

    In the limit, a trillion dollar exploit that will be worthless once discovered will only be used with the utmost possible care, on a very tiny number of people. That's way better than something that you can play around with and target thousands.

    https://www.theguardian.com/news/2021/jul/19/fifty-people-cl...

  • 100%, the argument is perfect in its circularity: we should make it uneconomical for there to be iMessage exploits by fixing iMessage exploits.

  • Computer security isn't a board game where my unit can Damage your unit if my unit has more Combat than your unit has Defense, and once your unit is Damaged enough you lose it, and you can buy a card with 5 Combat for 5 Gold, and so on. It's not a contest of strength. It's not about who has the most gold. It's about who fucks up.

    If you follow the guidelines in http://canonical.org/~kragen/cryptsetup to encrypt the disk on a new laptop, it will take you an hour (US$100), plus ten practice reboots over the next day (US$100), plus 5 seconds every time you boot forever after (say, another US$100), for a total of about US$300. A brute-force attack by an attacker who has killed you or stolen your laptop while it was off is still possible. My estimate in that page is that it will cost US$1.9 trillion. That's the nature of modern cryptography. (The estimate is probably a bit out of date: it might cost less than US$1 trillion now, due to improved hardware.)

    Other forms of software security are considerably more absolute. Regardless of what you see in the movies, if your RAM is functioning properly and if there isn't a cryptographic route, there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T. It's like trying to find a number that when multiplied by 0 gives you 5. The money you spend on attacking the problem is simply irrelevant.

    Usually, though, the situation is considerably more absolute in the other direction: there are always abundant holes in the protections, and it's just a matter of finding one of them.

    Now, of course there are other ways someone might be able to decrypt your laptop disk, other than stealing it and applying brute force. They might trick you into typing the passphrase in a public place where they can see the surveillance camera. They might use a security hole in your browser to gain RCE on your laptop and then a local privilege escalation hole to gain root and read the LUKS encryption key from RAM. They might trick you into typing the passphrase on the wrong computer at a conference by handing you the wrong laptop. They might pay you to do a job where you ssh several times a day into a server that only allows password authentication, assigning you a correct horse battery staple passphrase you can't change, until one day you slip up and you type your LUKS passphrase instead. They might steal your laptop while it's on, freeze the RAM with freeze spray, and pop the frozen RAM out of your motherboard and into their own before the bits of your key schedule decay. They might break into your house and implant a hardware keylogger in your keyboard. They might do a Zoom call with you and get you to boot up the laptop so they can listen to the sound of you typing the passphrase on the keyboard. (The correct horse battery staple passphrases I favor are especially vulnerable to that.) They might remotely turn on the microphone in your cellphone, if they have a way into your cellphone, and do the same. They might use phased-array passive radar across the street to measure the movements of your fingers from the variations in the reflection of Wi-Fi signals. They might go home with you from a bar, slip you a little scopolamine, and suggest that you show them something on your (turned-off) laptop while they secretly film your typing.

    The key thing about these attacks is that they are all cheap. Well, the last one might cost a few thousand dollars of equipment and tens of thousands of dollars in rent. None of them requires a lot of money. They just require knowledge, planning, and follow-through.

    And the same thing is true about defenses against this kind of thing. Don't run a browser on your secure laptop. Don't keep it in your bedroom. Keep your Bitcoin in a Trezor, not your laptop (and obviously not Coinbase), so that when your laptop does get popped you don't lose it all.

    You could argue that, with dollars, you can hire people who have knowledge, do planning, and follow through. But that's difficult. It's much easier to spend a million (or a billion, or a trillion) dollars hiring people who don't. In fact, large amounts of money is better at attracting con men, like antivirus vendors, than it is at attracting people like the seL4 team.

    Here in Argentina we had a megalomaniacal dictator in the 01940s and 01950s who was determined to develop a domestic nuclear power industry, above all to gain access to atomic bombs. Werner Heisenberg was invited to visit in 01947; hundreds of German physicists were spirited out of the ruined, occupied postwar Germany. National laboratories were built, laboratory-scale nuclear fusion was announced to have been successful, promises to only seek peaceful energy were published, plans for a nationwide network of fusion energy plants were announced, hundreds of millions of dollars were spent (in today's money), presidential gold medals were awarded...

    ...and finally in 01952 it turned out to be a fraud, or at best the kind of wishful-thinking-fueled bad labwork we routinely see from the free-energy crowd: https://en.wikipedia.org/wiki/Huemul_Project

    Meanwhile, a different megalomaniacal dictator who'd made somewhat better choices about which physicists to trust detonated his first H-bomb in 01953.

    • > there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T.

      Nitpick: for only about US$1M (give or take an order of magnitude or two depending on location), the process (assuming network access) can hire an assassin to kill you, pull up a shell on your computer, and give the process whatever privileges it wants.

      1 reply →

One possible technical improvement is high quality honeypots. If Apple tried hard, they could arrange for certain iPhones to have instrumentation intended to detect and characterize these sorts of attacks. If every targeted user has a 0.1% chance of leaking the exploit vector to Apple, then mass exploitation becomes much more complex and expensive.

Doing this well would be hard, but even an imperfect implementation would have some value.

  • It might be hard to convince the privacy engineers to allow us access to a random sample of message attachments. What if we asked for a temporary 'root' access credential, that is only valid for 3 minutes per day?

NSO == arms dealers, by their own admission. They did not create the market for digital arms, but successfully cater to it. No HN comment will change their business model.

They benefit from the easy distribution of software twice. Once as an exploit developer, because all target systems look alike (recall hardware and software vendors also want to write hardware and software once and then distribute widely), and therefore an exploit must only be developed once to apply broadly. Then, a second time as a software developer, because they can sell the same software to multiple clients. Having worked on Pegasus-like systems, I can say the thing that is dreaded the most and is very costly is a rewrite. Those get financially prohibitive.

If the world was serious about stopping the NSOs of the world, it would work toward efficiently (read: inexpensively) making individual systems wildly different yet still interoperable (because the interoperability is where the network effect comes in, providing value in communication systems and leading to their wider adoption). The conflict to solve is how to make systems interoperable and non-interoperable at the same time. While it is easy to imagine randomized instruction sets, Morpheus-like blindly-randomize-everything chips, and bytecode VMs that use binary translation to modify the phenotype of each individual system, it is not so easy to envision how systems could be written once to interoperate yet prevent BORE-type attacks, whereby the one-time exploit development cost can easily be offset by repeat exploitation. The only way forward is to find the lever which gives defenders a cheap button to push that forces an expensive Pegasus rewrite.

There's plenty of evidence that this type of attack surface (parsers operating on untrusted data received over the Internet) is fixable, even at Big Tech scale. The most obvious example is Microsoft Office in the early 2000s and the switch to the XML-based formats with newer, easier-to-implement, and ideally memory-safer parsers. That's not to say there are no bugs in Office anymore, but it's certainly much, much better than it was.

Microsoft figured it out. Apple can do it, too.

The point that the author makes is very valuable: it is important not to throw our hands in the air. If you are not moving forward, you are falling back.

Though one (perhaps nit-picky) point I'd like to make is that these dictators are not dumb. They are incredibly intelligent. They themselves are probably not hackers, but they understand people and power. They are going to do what they can to get what they want. We can't ignore the role they play in creating these problems, and we need to take it just as seriously as we would a technical security exploit.

  • You don't have to be "incredibly intelligent" to pay some company to hack a list of your enemies. That just takes money and a list of people you hate, not insight.

  • Not to split hairs here, but let's separate this into a few different traits: smart, intelligent, and cunning.

    Most dictators are not very intelligent. Just like Donald Trump is not very intelligent.

    Cunning and with social smarts would be apt. These guys really know how to play people off each other, and manipulate, like, really well.

How does a bug in iMessage lead to my iPhone being completely taken over by Pegasus? I thought apps were sandboxed on iOS.

Or can they only monitor SMS/iMessages with this entry point?

The immediate takeaway from this article is that if any of these threat actors are on your radar and you have an iPhone, then you should disable iMessage.

  • I just disabled iMessage and FaceTime ... I don't actively use them anyway, and I have a feeling Apple applies more scrutiny to 3rd-party apps (WhatsApp, Signal, ...) than to their own.

    I don't know if disabling iMessages is enough though in this case.

The security attack that scares me more than any other is rough men with guns kidnapping me in the middle of the night and then torturing me until I reveal my security material. While normally torture just results in the victim saying anything to make it stop, in the specific case where the attacker has encrypted material and can test key extraction in real time, torture is highly effective.

  • There’s a canonical term for this: rubber hose cryptography. That’s when you beat someone with a rubber hose until they give you the key. It’s effective against a wide range of algorithms and constructions.

    • The technical solution is having very available, very believable lies: something where you can "reveal" false secrets that decrypt to data your attacker will find believable.

      This is generally hard, because you have to know, at the time of being tortured, which fake secret will give believable results.

The only practical security is security through isolation, like what Qubes OS provides. Security through correctness is impossible.

  • With Qubes you are depending on the correctness of the VM and whatever hardware it is running on. Modern chips are really complex.

    The only perfectly secure computer is one that is off. Security is always about probabilities and trade-offs; as you approach perfection, cost approaches infinity. It’s similar to adding “nines” to your uptime.

    A good security policy balances cost with security and also has plans in place for what to do if security is compromised.

  • Stupid question: how do you know your isolation is correct?

    • Not a stupid question at all. Nothing is 100% correct. Instead, you look at the attack surface, which for Qubes is extremely small: no network in the AdminVM, only 100k lines of code in the Xen hypervisor, hardware virtualization with an extremely small number of discovered escapes, and so on.

      3 replies →

    • You test for it with rigor and incorporate new learning, just like every other engineering discipline.

    • There have been Qubes-breaking bugs in Xen before, and it wouldn't be surprising to see more.

  • You seem to have missed the point of the article completely.

    We can’t achieve perfect security (there’s no such thing). What we can achieve is raising the bar for attackers. Simple things like using memory-safe languages for handling untrusted inputs, least-privilege design, defense in depth, etc.

    • Memory-safe languages are good, but decreasing the attack surface through compartmentalization is much more reliable I think.

> In a perfect world, US and European governments would wake up and realize that arming authoritarianism is really bad for democracy

Well, they haven't done that since what, the early 20th century, what with "he may be a son of a bitch, but he's our son of a bitch"? If the US really cared that much about "democracy", perhaps it would have sorted things out by now in, say, Mexico: after all, that's its closest-connected country after Canada, right? Yet the last time I checked the news about the upcoming Mexican elections, there were quite a lot of dead or missing opposition candidates reported (just as there were during the previous elections, and the ones before that, and...). Interestingly enough, the US press doesn't report much on that: apparently things in Afghanistan, across half the globe, are much more important for homeland security.

Can someone explain why BlastDoor has been unsuccessful? Is it too hard a problem to restrict what iMessage can do?

> Notably, these targets include journalists and members of various nations’ political opposition parties

For all we know it also included cryptographers and security researchers. Unfortunately, the list hasn't been published-- so we only know what the journalists who had access to it cared to look up.

I don't care in the slightest about computer security as long as governments won't ask experts to write laws and security standards. The same goes for insurance companies, which should be trying to sell cyber insurance and lobbying the government for security standards.

There are plenty of security standards for many things that are not computers, yet cyber is a weird exception.

My understanding is that it comes down to both a liberal Silicon Valley state of mind and the NSA benefiting from low security standards and having a monopoly on tech companies.

In my view, the computer security industry is to blame here, because it benefits from chaos and a lack of government intervention.

> The problem that companies like Apple need to solve is not preventing exploits forever, but a much simpler one: they need to screw up the economics of NSO-style mass exploitation.

On the one hand, sure, make it too expensive to do this. On the other hand, how much more expensive is too expensive? When the first SHA1 collision attack was found, it was considered a problem and SHA1 was declared unsuitable for security purposes; now producing a collision is cheap.

The article states:

>"A more worrying set of attacks appear to use Apple’s iMessage to perform “0-click” exploitation of iOS devices. Using this vector, NSO simply “throws” a targeted exploit payload at some Apple ID such as your phone number, and then sits back and waits for your zombie phone to contact its infrastructure."

Does anyone have a link or any resources that describe how this “0-click” exploitation works?

... against nihilism? They're just sort of handwaving and saying, "Well, uh... we should do better, somehow... and expect Apple to do better, and... uh..." How's that any different from saying "The problem is basically impossible"?

The core of the problem is complexity. Our modern computing stack can be broadly described as:

- Complexity to add features.
- Complexity to add performance.
- Complexity to solve problems with the features.
- Complexity to solve problems created from the performance complexity.
- Complexity added to solve the issues the previous complexity created.

And this has been iterating over, and over, and over... and over. The code gets more complex, so the processors have to be faster, which adds side channel issues, so the processors get more complex to solve that, as does the software, hurting performance, and around you go again.

At no point does anyone in the tech industry seem to step back and say, "Wait. What if we simplify instead?" Delete code. Delete features. I would rather have an iPhone without iMessage zero click remote exploits than one with animated cartoons based on me sticking my tongue out and waggling my eyebrows, to pick on a particularly complex feature.

I've made a habit of trying to run as much as I can on low power computers, simply to see how it works, and ideally help figure out the choke points. Chat has gotten comically absurd over the years, so I'll pick on it as an example of what seems, to me, to be needless complexity.

Decades ago, I could chat with other people via AIM, Yahoo, MSN, IRC, etc. Those clients were thin, light, and ran on a single core 486 without anything that I recall as being performance issues.

Today, Google Chat (having replaced Hangouts, which was its own bloated pig in some ways) struggles to keep up with typing on a quad core, 1.5GHz ARM system (Pi 4). It pulls down nearly 15MB of resources - or roughly 30% of a Windows 95 install. To chat with someone person to person, in the same way AIM did decades ago. I'm more used to lagged typing in 2021 than I was in 1998.

Yes, it's got some new features, and... I'm sure someone could tell me what they are, but in terms of sending text back and forth to people across the internet, along with images, it's fundamentally doing the exact same thing that I did 20 years ago, just using massively more resources, which means there are massively more places for vulnerabilities, exploits, bugs, etc, to hide. Does it have to be that huge? No idea, I didn't write it. But it's larger and slower than Hangouts, to accomplish, as far as I'm concerned, the same things.

We can't just keep piling complexity on top of complexity forever and expect things to work out.

Now, if I want to do something like IRC, which is substantially unchanged from the 90s, I can use a lightweight native client that uses basically no CPU and almost no memory, on an old Pi 3 with an in-order CPU, no speculation, a rather stripped-down kernel, no browser, etc. That's going to be a lot harder to find bugs in than the bloated code that is most of modern computing.

But nobody gets promoted for stripping out code and making things smaller these days, it seems.

As long as the focus is on adding features that require more performance, we're simply not going to get ahead of the security bugs. And if everyone writing the code has decided that memojis are more important than securing iMessage against remote zero-click exploits, well... OK. But the lives of journalists are the collateral damage of those decisions.

These days, I regularly find myself wondering why I bother with computers at all outside work. I'd free up a ton of "overhead maintenance time" I spend maintaining computers, and that's before I get into the fact that even with aggressive attempts to tamp down privacy invasions, I'm sure lots of my data is happily being aggregated for... whatever it is people do with that, send ads I block, I suppose.

  • The bugs we're talking about have almost nothing to do with the underlying message transport, but rather the features built on top of it. Replacing iMessage with IRC wouldn't solve anything.

    • No, but my point is about complexity.

      If all iMessage allowed were ASCII text strings, do you think it would have nearly the same attack surface as it does now, allowing all the various things it supports (including, if I recall properly, some tap based patterns that end up on the watch)?

      In a very real sense, complexity (which is what features are) is at odds with security. You increase the attack surface, and you increase the number of pieces you can put together into weird ways that were never intended, but still work and get the attacker something they want.

      If there were some toggle to disable parsing everything but ASCII text and images in iMessage, I'd turn it on in a heartbeat. (A minimal sketch of what such a filter could look like follows this reply thread.)

      10 replies →

  • Well put. The market values features. With present system engineering approaches, the path of least resistance is to add complexity to enable said features and reap the financial rewards. It takes more effort to build smaller attack surfaces, so nature tends to avoid that path. Regulation helps little. Security is not additive, it is subtractive. Less is more. There is very little incentive to simplify, except in niche segments. So, zero surprise commodity systems fail so horrendously.

  • This is a really good point. Most corporate development that I have experienced is centered around "features" and speed: "I'm working on a new feature", "there has been a feature request", "the feature has a bug." The only time the complexity of the project is considered is when it fails and the team is canned.
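
Picking up the ASCII-only toggle suggested in the reply above, here is a minimal sketch in Rust of what such an allow-list filter could look like. The Payload type, the MIME allow-list, and the idea of a per-message toggle are all hypothetical; nothing here reflects actual iMessage internals.

    // Hypothetical allow-list filter: anything that isn't printable ASCII
    // text (or one of a couple of whitelisted image types) is dropped
    // before it ever reaches a complex parser.
    enum Payload<'a> {
        Text(&'a str),
        Attachment { mime: &'a str },
    }

    fn allowed(p: &Payload) -> bool {
        match p {
            // Printable ASCII plus newline only; everything else is rejected.
            Payload::Text(t) => t.bytes().all(|b| b == b'\n' || (b' '..=b'~').contains(&b)),
            // Only a tiny, fixed set of attachment formats gets parsed at all.
            Payload::Attachment { mime } => matches!(*mime, "image/png" | "image/jpeg"),
        }
    }

    fn main() {
        assert!(allowed(&Payload::Text("plain old text")));
        assert!(!allowed(&Payload::Text("text with a \u{1F600} emoji")));
        assert!(!allowed(&Payload::Attachment { mime: "application/pdf" }));
        println!("allow-list filter behaves as expected");
    }

The design point is that the filter's job is to keep complex parsers from ever seeing hostile input, not to parse that input more carefully.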

An industry exists when there's a market, in other words when there's supply and demand. Difficult hacking increases demand, and more players will join the game.

Until something changes how the Internet works, the very moment we send something across the Internet through a service, we no longer have control over the data. Pegasus is just the tip of an iceberg, and with technology getting ever closer to us (HomePods, smart appliances, and so on), it is only a matter of time until all the big brands are hiring hackers to spy on their customers so that they can produce better products (make more money).

Going forward, privacy is not supposed to be a personal preference, the way platforms nowadays make us click through different settings to opt out; it is supposed to be something we all collectively have out of the box, and we need to work together towards that goal.

The problem is that recent acquisitions of security companies suggest that it costs roughly $100 per user per month to have just basic security. Costs go up exponentially from there.

Treating this like a technology issue is a mistake. Tech fixes are band-aids. This is a social and governance issue that needs social and governance solutions.

Should iMessage do what Facebook messenger does and request receiver permission before letting a new contact message them?

Apple seems incapable of successfully sandboxing iMessage after years of exploits. At this point I think we just have to assume they just don't care.

  • This doesn’t seem like a stable fact about Apple. Their priorities can change. I expect that these recent revelations have gotten the attention of top management and that they are likely to get a strong organizational response.

    (I’m reminded of Google’s responses to the Snowden leak.)

  • You're not entirely wrong. For a proprietary, closed-source, limited-access system that Apple has complete control of, it's surprisingly vulnerable and slow to be patched.

    • It's pretty much completely wrong. Apple invests more in this problem than almost any vendor in the world, and no vendor with a comparable footprint fares meaningfully better than they do --- Google surpasses them at some problems, and vice versa. The problems we're dealing with here are basically at the frontier of software engineering and implicate not so much Apple as the entire enterprise of commercial consumer software development, no matter where it's practiced.

      It's fair to criticize Apple. But you can't reasonably argue that they DGAF.

      1 reply →

Not owning or relying on a phone is the first step of security. We can't trust phones, any of them.

The article correctly refutes the silly binary argument that many people fall back on that since perfection is impossible, we must accept an imperfect solution. And since the current solutions are clearly imperfect, the status quo must be acceptable since imperfect solutions are acceptable.

However, the article falls right into the next failed model of considering everything in terms of relative security. We should make things “better”, we should make things “harder”, but those terms mean very little. 1% better is “better”. Making a broken hashing function take 2x as long to break makes things “harder”, but it does not make things more secure since it is already hopelessly inadequate. The problem with considering things only in relative terms to existing solutions is that it ignores defining the problem, and more importantly, it does not tell you if you solved your problem.

The correct model is the one used by engineering disciplines: specifying objective, quantifiable standards for what is adequate and then verifying that the solution meets those standards. If you do not define what is adequate, how do you know whether you have even achieved the bare minimum of what you need, or how far your solution is from it?

For instance, consider the same NSO case as the article. Did Apple do an adequate job, what is an adequate job, and how far away are they?

Well, let us assume that the average duration of surveillance for the 50,000 phones was 1 year per phone. Now what is a good level of protection against that kind of surveillance? I think a reasonable standard is making the phone not the easiest way to surveil a person for that length of time: it should be cheaper to do it the old-fashioned way, so the phone does not make you more vulnerable on average. So, how much does it cost to surveil a person and listen in on their conversations for a year the old-fashioned way? $1k, $10k, $100k? If we assume $10k, then the attacker cost needed to adequately protect against NSO-type surveillance at this scale is 50,000 × $10k = $500M.

So, how far away is Apple from that? Well, Zerodium pays $1.5M per iMessage zero-click [1]. If we assume NSO burned 10 of them, infecting a mere 5k phones per exploit with a trivially wormable complete compromise, that would amount to ~$15M at market price. Adding in the rest of the work, it would maybe cost $20M altogether, worst case. So, if you agree with this analysis (and if you don't, feel free to plug in your own estimates), then Apple has achieved ~4% of the necessary level and would need to raise attacker costs by roughly 25x to achieve adequate security against this type of attack. I think that should make it clear why things are so bad. “Best in class” security needs to improve by over 10x to become adequate. It should be no wonder these systems are so defenseless.

[1] http://zerodium.com/program.html
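
For what it's worth, here is a quick sanity check of that arithmetic in Rust, with the assumptions spelled out. The $5M figure for "the rest of the work" is a guess chosen only to match the ~$20M worst-case total above; the other inputs are taken directly from the comment and from Zerodium's published payout [1].

    fn main() {
        // All inputs below are the commenter's assumptions, not measurements.
        let phones = 50_000.0_f64;
        let old_fashioned_per_year = 10_000.0; // assumed cost of conventional surveillance, per person per year
        let adequacy_bar = phones * old_fashioned_per_year; // $500M

        let zero_click_price = 1_500_000.0; // Zerodium's listed iMessage zero-click payout [1]
        let exploits_burned = 10.0;
        let other_work = 5_000_000.0; // guess, to match the ~$20M "altogether" figure
        let attack_cost = zero_click_price * exploits_burned + other_work;

        println!("adequacy bar:          ${:.0}", adequacy_bar);
        println!("estimated attack cost: ${:.0}", attack_cost);
        println!(
            "fraction of bar: {:.0}%, shortfall: ~{:.0}x",
            100.0 * attack_cost / adequacy_bar,
            adequacy_bar / attack_cost
        );
    }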

I am not convinced Apple wasn’t aware of what NSO was doing.

Governments want access to spy on people. Apple wants to market and sell a “secure” mobile device.

In a way, NSO provides Apple with a perfect out. They can legally claim they are a secure platform and do not work with bad actors or foreign governments to “spy”.

Hear no evil, see no evil. NSO's ability to penetrate iOS gives powerful governments what they want, and in a way it may keep “pressure” off Apple to provide official back-door access.

  • That would be a surprise to literally every person who works in Ivan's organization at Apple. This is message-board-think, not analysis.

  • How would Apple benefit from this conspiracy? Apple these days is throwing out the "secure" and "private" PR to differentiate themselves from their surveillance capitalist neighbors. They would never do anything that hurts their PR, and they would always do the bare minimum security and privacy to support it. If they knew of these vulnerabilities ahead of time, I am positive they would have patched them. This is not to say that they are doing everything they can, but I don't see your proposed conspiracy following profit-maximizing corporate logic. Apple isn't there to screw you; they're just there for the big buck.

    • The benefit would be that they could still market the device as the most “secure” while at the same time ducking pressure from governments to allow back-door access.

      If Apple truly secured the OS to the point where even state-level actors could not get access, it would bring unwanted attention, regulation, etc. from powerful governmental agencies.

      Apple has also been interestingly silent/vague in its response to this story.

This thread devolved into a debate over C versus Rust or other memory-safe languages.

1. Blaming the language instead of the programmer will not lead to improved program quality.

2. C will always be available as a user's language. Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages.

3. Those in the future who are practiced in C will have a significant advantage in being able to leverage an enormous body of legacy code, of which a ton is written in C. Programmers in the future who are schooled only in memory-safe languages may not be able to approach C as a learning resource, and may in fact be taught to fear it.

There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors. It is amazing how easily that work is ignored in these debates. Find me a buffer overflow or use-after-free in one of djb's programs.

  • > 1. Blaming the language instead of the programmer will not lead to improved program quality.

    I disagree. Blaming the language is critically important. Tony Hoare (who holds a Turing Award and is a genius) puts it well.

    > a programming language designer should be responsible for the mistakes that are made by the programmers using the language. [...]

    > It's very easy to persuade the customers of your language that everything that goes wrong is their fault and not yours. I rejected that...

    [0]

    > Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages

    Users will always write C. No, they won't always be smaller and faster.

    > 3. Those in the future who are practiced in C will have a siginificant advantange in being able to leverage an enormous body of legacy code

    Much to society's loss, I'm sure.

    > and may in fact be taught to fear it

    Cool. Same way we teach people to not roll their own crypto. This is a good thing. Please be more afraid.

    > There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors.

    No one cares. Not only is that not provably the case, nor is it likely the case, but it's also irrelevant when I'm typing on a computer with a C kernel, numerous C libraries, in a C++ browser, or texting someone via a C++ app that has to parse arbitrary text, emojis, videos, etc.

    > Find me a buffer overflow or use-after-free in one of djb's programs.

    No, that's a stupid waste of my time. Thankfully, others seem more willing to do so[1] - I hate to even entertain such an arbitrary, fallacious benchmark, but it's funny so I'll do it just this once.

    [0] http://blog.mattcallanan.net/2010/09/tony-hoare-billion-doll...

    [1] http://www.guninski.com/where_do_you_want_billg_to_go_today_...

    • Guess who is "rolling your crypto" for you: the same guy who is the subject of your "fallacious benchmark". He writes in C. The problem with software is not the languages (it seems a new one is created every month); it is a lack of competence in using them. (Over)confidence is very common, competence not so much. I am thankful that Dennis Ritchie created C; I appreciated his humility and I do not blame him for others' mistakes.

      8 replies →

"An entirely separate area is surveillance and detection: Apple already performs some remote telemetry to detect processes doing weird things. This kind of telemetry could be expanded as much as possible while not destroying user privacy. While this wouldn't necessarily stop NSO, it would make the cost of throwing these exploits quite a bit higher - and make them think twice before pushing them out to every random authoritarian government."

Apple could do more spying (excuse me, "telemetry") "as much as possible", in addition to NSO's... because it would make the competitor's spying more expensive.

This could be a unilateral decision to be made by Apple without input from users, as usual.

Any commercial benefits to Apple due to the increased data collection would be purely incidental, of course.

Apple and NSO may have different ways of making money, but they both use (silent) data collection from computer users to help them.

This article doesn't seem to have a direction, it just seems to be a lump of refutations about how hard it is to maintain a secure system, and how we need to be understanding throughout this process. What it doesn't actually address is security nihilism, so let's expand on the seed he plants in the final section:

> It’s the scale, stupid

This should 100% be the focus, not how truly amicable Apple's efforts are to improve security. Security nihilism is entirely about scale, and understanding your place in the digital pecking order. The only way to be 'secure' in that sense is to directly limit the amount of personal information that the surrounding world has on you: in most first-world countries, it's impossible to escape this. Insurance companies know your medical history before you even apply for their plan, your employer will eventually learn about 80% of your lifestyle, and the internet will slowly sap the rest of the details. In a world where copying is free, it's undeniable that digital security is a losing game.

Here's a thought: instead of addressing security nihilism in the consumer, why don't you highlight this issue in companies? There's currently no incentive to hack your phone unless it has valuable information that can't be found anywhere else: in which case, you have more of a logistics issue than a security one. Meanwhile, ransomware and social-engineering attacks are at an all-time high, yet our security researchers are taking their time to hash out exactly how mad we deserve to be at Apple for their exploit-of-the-week. If this is the kind of attitude the best-of-the-best have, it's no wonder we're the largest target for cyberattacks in the world.

  • > The only way to be 'secure' in that sense is to directly limit the amount of personal information that the surrounding world has on you

    I may be misunderstanding you, but this is privacy, not security. The two are not completely separate, but that’s another issue.

"Security nihilism" sounds like a dig, but actually, it's the same thing as "security realism". All security eventually fails, thus all security is eventually meaningless. Not only can you not have perfect security, but the entirety of security is just a hedge against the will of your opponent. Any kind of security - from a file cabinet lock to a nuclear weapon - only stops people less motivated than you. I don't care what thing you think you've created, it can be defeated, one way or another.

You know what you could do to make NSO's life harder, other than develop more security? Fight them. Have your politicians attack Israel for allowing the group to operate. Use sanctions, take a proportional response, condemn their actions at the UN, stop protecting them. Block Israel from the Apple Store. It's a more direct route, and more likely to succeed than making your goal "perfect security". (Would there be huge political challenges? Of course; but those are more approachable than "perfect security")

  • >You know what you could do to make NSO's life harder, other than develop more security? Fight them. Have your politicians attack Israel for allowing the group to operate. Use sanctions, take a proportional response, condemn their actions at the UN, stop protecting them. Block Israel from the Apple Store. It's a more direct route, and more likely to succeed than making your goal "perfect security". (Would there be huge political challenges? Of course; but those are more approachable than "perfect security")

    But it's not just NSO; every reasonable country probably has people like them.