Comment by ENOTTY
4 years ago
The 'economic' argument simply doesn't work. Does the author think that every "tin-pot authoritarian" owns a poor country scrabbling in the unproductive desert for scraps? Of course not!
One of NSO's best customers is Saudi Arabia (SA), where money literally bursts out of the ground in the form of crude oil. Saudi Aramco's market cap is comparable to Apple's. Good luck making it "uneconomical" for SA to exploit iPhones.
I'll even posit that there is no price at which the government of SA could not afford an exploitation tool. The governments that purchase these tools aren't doing it for shits and giggles. They're doing it because they believe that their targets represent threats to their continued existence.
Think of it this way: if it cost you a trillion dollars to preserve your access to six trillion dollars' worth of wealth, would you spend it? I would, in a heartbeat.
I respectfully disagree.
If we can raise the cost from $100k per target to $10m per target, even SA will reduce the number and breadth of targets.
They do have limited funds, and they want to see an ROI. At a lower cost, perhaps they’ll just monitor every single journalist who has ever said a bad thing about the king. As that price increases, they’ll be more selective.
Like Matt said, that’s not ideal. But forcing a more highly targeted approach, rather than the current fishing-trawler approach, is an incremental improvement.
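To put toy numbers on that ROI point, here's a minimal sketch in Python. The budget and per-target costs are purely illustrative assumptions, not real NSO prices or any state's actual spending:

    # Toy sketch: how many targets a fixed surveillance budget covers as the
    # per-target cost of exploitation rises. All figures are made up for
    # illustration; they are not real NSO prices or any state's real budget.

    def affordable_targets(annual_budget_usd: float, cost_per_target_usd: float) -> int:
        """Number of targets the budget covers at a given per-target cost."""
        return int(annual_budget_usd // cost_per_target_usd)

    budget = 500_000_000  # hypothetical $500M/year surveillance budget

    for cost_per_target in (100_000, 1_000_000, 10_000_000):
        n = affordable_targets(budget, cost_per_target)
        print(f"${cost_per_target:>12,} per target -> {n:>6,} targets")

    # $     100,000 per target ->  5,000 targets
    # $   1,000,000 per target ->    500 targets
    # $  10,000,000 per target ->     50 targets

Same budget, two orders of magnitude fewer people under surveillance. That's the whole argument in three lines of output.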
The NSO target list has like 15,000 Mexican phone numbers on it. You don't think making exploits more expensive would force attackers to prioritize only the very highest value targets?
In the limit, a trillion-dollar exploit that will be worthless once discovered will only be used with the utmost care, on a very small number of people. That's way better than something you can play around with and use to target thousands.
https://www.theguardian.com/news/2021/jul/19/fifty-people-cl...
100%. The argument is perfect in its circularity: we should make it uneconomical for there to be iMessage exploits by fixing iMessage exploits.
Computer security isn't a board game where my unit can Damage your unit if my unit has more Combat than your unit has Defense, and once your unit is Damaged enough you lose it, and you can buy a card with 5 Combat for 5 Gold, and so on. It's not a contest of strength. It's not about who has the most gold. It's about who fucks up.
If you follow the guidelines in http://canonical.org/~kragen/cryptsetup to encrypt the disk on a new laptop, it will take you an hour (US$100), plus ten practice reboots over the next day (US$100), plus 5 seconds every time you boot forever after (say, another US$100), for a total of about US$300. A brute-force attack by an attacker who has killed you or stolen your laptop while it was off is still possible. My estimate in that page is that it will cost US$1.9 trillion. That's the nature of modern cryptography. (The estimate is probably a bit out of date: it might cost less than US$1 trillion now, due to improved hardware.)
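For context, here's a back-of-the-envelope sketch of how that kind of brute-force estimate gets put together. The passphrase entropy, guess rate, device cost, and lifetime below are placeholder assumptions, not the figures from the linked page, so the output only shows the shape of the calculation:

    # Back-of-the-envelope brute-force cost model (placeholder numbers, not
    # the assumptions from the linked page).

    entropy_bits = 65                    # e.g. a five-word Diceware-style passphrase (~13 bits/word)
    keyspace = 2 ** entropy_bits         # guesses needed in the worst case

    guesses_per_sec_per_device = 1_000   # LUKS key derivation is deliberately slow
    device_cost_usd = 1_000              # one hypothetical cracking device
    device_lifetime_sec = 3 * 365 * 24 * 3600   # assume ~3 years of useful life

    guesses_per_device = guesses_per_sec_per_device * device_lifetime_sec
    devices_needed = keyspace / guesses_per_device
    total_cost_usd = devices_needed * device_cost_usd

    print(f"~{total_cost_usd:.1e} USD of hardware to exhaust the keyspace")
    # With these made-up numbers it prints something on the order of 1e11 to 1e12 USD.
    # The result swings by orders of magnitude with the passphrase entropy and the
    # key-derivation cost, which is exactly why you tune both.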
Other forms of software security are considerably more absolute. Regardless of what you see in the movies, if your RAM is functioning properly and if there isn't a cryptographic route, there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T. It's like trying to find a number that when multiplied by 0 gives you 5. The money you spend on attacking the problem is simply irrelevant.
Usually, though, the situation is considerably more absolute in the other direction: there are always abundant holes in the protections, and it's just a matter of finding one of them.
Now, of course there are other ways someone might be able to decrypt your laptop disk, other than stealing it and applying brute force. They might trick you into typing the passphrase in a public place where they can see the surveillance camera. They might use a security hole in your browser to gain RCE on your laptop and then a local privilege escalation hole to gain root and read the LUKS encryption key from RAM. They might trick you into typing the passphrase on the wrong computer at a conference by handing you the wrong laptop. They might pay you to do a job where you ssh several times a day into a server that only allows password authentication, assigning you a correct horse battery staple passphrase you can't change, until one day you slip up and you type your LUKS passphrase instead. They might steal your laptop while it's on, freeze the RAM with freeze spray, and pop the frozen RAM out of your motherboard and into their own before the bits of your key schedule decay. They might break into your house and implant a hardware keylogger in your keyboard. They might do a Zoom call with you and get you to boot up the laptop so they can listen to the sound of you typing the passphrase on the keyboard. (The correct horse battery staple passphrases I favor are especially vulnerable to that.) They might remotely turn on the microphone in your cellphone, if they have a way into your cellphone, and do the same. They might use phased-array passive radar across the street to measure the movements of your fingers from the variations in the reflection of Wi-Fi signals. They might go home with you from a bar, slip you a little scopolamine, and suggest that you show them something on your (turned-off) laptop while they secretly film your typing.
The key thing about these attacks is that they are all cheap. Well, the last one might cost a few thousand dollars of equipment and tens of thousands of dollars in rent. None of them requires a lot of money. They just require knowledge, planning, and follow-through.
And the same thing is true about defenses against this kind of thing. Don't run a browser on your secure laptop. Don't keep it in your bedroom. Keep your Bitcoin in a Trezor, not your laptop (and obviously not Coinbase), so that when your laptop does get popped you don't lose it all.
You could argue that, with dollars, you can hire people who have knowledge, do planning, and follow through. But that's difficult. It's much easier to spend a million (or a billion, or a trillion) dollars hiring people who don't. In fact, large amounts of money are better at attracting con men, like antivirus vendors, than at attracting people like the seL4 team.
Here in Argentina we had a megalomaniacal dictator in the 01940s and 01950s who was determined to develop a domestic nuclear power industry, above all to gain access to atomic bombs. Werner Heisenberg was invited to visit in 01947; hundreds of German physicists were spirited out of the ruined, occupied postwar Germany. National laboratories were built, laboratory-scale nuclear fusion was announced to have been successful, promises to only seek peaceful energy were published, plans for a nationwide network of fusion energy plants were announced, hundreds of millions of dollars were spent (in today's money), presidential gold medals were awarded...
...and finally in 01952 it turned out to be a fraud, or at best the kind of wishful-thinking-fueled bad labwork we routinely see from the free-energy crowd: https://en.wikipedia.org/wiki/Huemul_Project
Meanwhile, a different megalomaniacal dictator who'd made somewhat better choices about which physicists to trust detonated his first H-bomb in 01953.
> there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T.
Nitpick: for only about US$1M (give or take an order of magnitude or two depending on location), the process (assuming network access) can hire an assassin to kill you, pull up a shell on your computer, and give itself whatever privileges it wants.
Normally it wouldn't have network access, but that's an excellent point—generalized, once programs can start having physical-world effects that loop around to affect the computer they're running on, you can no longer make such adamantium-clad guarantees. And, as Rowhammer and various passive-emission attacks show, it's not uncommon in practice for the ordinary execution of the program to have such effects.
Still, this kind of thing isn't always applicable. If the seL4 kernel in question is on orbit, or running on a computer at an unknown location, or in a submarine, or in a drone in flight, the assassin can't in practice sit down at the console. And if it's running on something like the Secure Enclave chip in an iPhone, or a permissive action link, physical access may be impractically difficult regardless of who you kill.