Comment by bflesch
11 hours ago
Slightly unrelated, but aren't these AES-specific custom CPU instructions just an easy way to collect the encryption keys? There is a speedup, but is it worth the risk?
If I were a nation-state actor, I'd have the CPU store the encryption keys supplied to the AES instructions somewhere, and whenever the data needs to be accessed, just read the stored keys back out.
No need to waste time deploying backdoored CPU firmware, waiting days or weeks, and then touching the hardware a second time to extract the information.
When all AES encryption keys are already stored somewhere on the CPU, you can do a drive-by readout at any point in time.
The Linux kernel has a compile-time flag to disable the use of these CPU instructions for encryption, but it can't be disabled at runtime. If "software encryption" is used, the nation-state actor needs to physically access the device at least twice, or use a network-based exploit that could be logged.
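For reference, and assuming the flag meant here is the AES-NI driver option in the kernel's x86 crypto Kconfig, the build-time opt-out looks roughly like this (the generic C implementation then handles AES instead):

    # In the kernel .config: leave the AES-NI accelerated driver out of the
    # build and fall back to the generic, table-based C implementation.
    # CONFIG_CRYPTO_AES_NI_INTEL is not set
    CONFIG_CRYPTO_AES=y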
There are serious risks of backdoors in CPUs, but they are not about the CPU gathering the AES keys.
The storage required for this would be humongous, and the CPU cannot know which data the keys have been used for. Moreover, this would be too easily defeated: even where the AES instructions let you pass a derived round key as an operand, you can always decline to do so and use a separate XOR instruction to combine the round keys with the intermediate state. Detecting such use would be too difficult.
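To make that concrete, here is a minimal sketch in C with AES-NI intrinsics, assuming pre-expanded AES-128 round keys: AESENC gets an all-zero key operand, and the real round key is applied with an ordinary XOR afterwards. Since AESENC computes MixColumns(ShiftRows(SubBytes(state))) XOR key, a zero key makes that XOR a no-op, and the separate XOR restores the standard AddRoundKey step while looking like any other XOR in the instruction stream:

    #include <wmmintrin.h>   /* AES-NI intrinsics (compile with -maes) */
    #include <emmintrin.h>   /* SSE2 */

    /* Encrypt one AES-128 block without ever handing a round key to the
     * AES instructions themselves: rk[0..10] are the expanded round keys. */
    __m128i aes128_encrypt_split_key(__m128i block, const __m128i rk[11])
    {
        const __m128i zero = _mm_setzero_si128();
        block = _mm_xor_si128(block, rk[0]);            /* initial AddRoundKey */
        for (int i = 1; i < 10; i++) {
            block = _mm_aesenc_si128(block, zero);      /* round with key = 0 */
            block = _mm_xor_si128(block, rk[i]);        /* AddRoundKey via plain XOR */
        }
        block = _mm_aesenclast_si128(block, zero);      /* final round, no MixColumns */
        return _mm_xor_si128(block, rk[10]);
    }

(The key schedule would also have to avoid AESKEYGENASSIST, but key expansion is just shifts, XORs, and table lookups, so it can be done in plain C.)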
No, there is no basis for fearing that the AES keys can be stored in CPUs. (On the other hand, you should fear that if you store keys in a TPM, they might never be erased, even if you demand this.)

The greatest possible danger of adversarial behavior of a CPU exists in laptop CPUs with integrated WiFi interfaces made by Intel. Unless you disconnect the WiFi antennas, it is impossible to be certain that the remote management feature of the WiFi interface is really disabled, preventing an attacker from taking control of the laptop in a manner that cannot be detected by the operating system.

The next danger in importance is in computers that have Ethernet interfaces with the ability to do remote management, where again it is impossible to be certain that this feature is disabled. (A workaround for the case when you connect such a computer to an untrusted network, e.g. directly to the Internet, is to use a USB Ethernet interface.)
I am not a chip designer, but from my limited understanding, this "somewhere" is the problem. You can have secret memory somewhere that goes unnoticed by analysts, but can it remain secret if it is as big as half the CPU? A quarter? How much storage can you fit in that die space? How many AES keys do you handle per day? Per hour of browsing HN with AES TLS ciphers? (Literally every cipher suite HN supports involves AES.)
We use memory-hard algorithms for password storage because memory is more expensive than compute. More specifically, it's die area that is costly, but at least the authors of Argon2 seem to equate the two. (If that's not correct, I based a Stack Overflow post or two on that paper, so please let me know.) It sounds to me like another storage area as large as the L1 cache would be easily visible under a microscope, and it could hold a few thousand keys at most... how would it decide which ones to keep?
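Rough arithmetic behind "a few thousand keys at most", assuming a hidden store the size of a typical 32 KiB L1 data cache:

    32 KiB                = 32768 bytes
    32768 / 16 bytes/key  = 2048 AES-128 keys
    32768 / 32 bytes/key  = 1024 AES-256 keys

A busy day of TLS browsing can plausibly negotiate session keys on that order, so the eviction policy becomes the whole game.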
Of course, the CPU is theoretically omnipotent within your hardware. It can read RAM and see "ah, you're running pgp.exe, let me store this key", but then you could say the same for any key your CPU handles (RSA too, or anything not using special CPU instructions).
Good points, but they might be mitigated by knowing that the first key after boot is the HDD-encryption key, and, if storage is limited, by keeping a counter for each key and always overwriting the least frequently observed one.
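Purely hypothetical, but a minimal sketch of that counter scheme, illustrating least-frequently-observed eviction over a small fixed table (the table size and 128-bit key width are arbitrary choices):

    #include <stdint.h>
    #include <string.h>

    #define SLOTS 1024            /* arbitrary: small enough to hide */

    struct slot { uint8_t key[16]; uint32_t count; };
    static struct slot table[SLOTS];

    /* Record an observed key: bump its counter if already resident,
     * otherwise evict the least frequently observed slot. */
    static void observe_key(const uint8_t key[16])
    {
        int victim = 0;
        for (int i = 0; i < SLOTS; i++) {
            if (memcmp(table[i].key, key, 16) == 0) {
                table[i].count++;
                return;
            }
            if (table[i].count < table[victim].count)
                victim = i;
        }
        memcpy(table[victim].key, key, 16);
        table[victim].count = 1;
    }

Note that counters exist only for keys currently resident in the table; deciding what to keep before the table has filled is the harder part.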
Could work. How do you know which key is the least frequently used if you can't store them all, though? You'd need some heuristic. Maybe it could record the first five keys it sees after every power-on, or something similarly simple.
Like, I do take your point, but it does seem quite involved for the chance that it'll get them something useful. They still need to gain physical access to the intact device, and to trust that the scheme never gets out, or the chipmaker's reputation is instantly trashed and the company potentially bankrupt. And we know from the Snowden documents that, at least in ~2013 (when the AES extensions weren't new, afaik), they couldn't decrypt certain ciphers, which is somewhat conspicuous if we hold these suspicions. It's a legitimate concern to consider, but perhaps not for the average use case.
edit: nvm, it was proposed in 2008, so its absence from ~2013 publications is not too surprising. The general point may still stand that "they" haven't (or hadn't) infiltrated most CPUs.
I don't imagine it would be too difficult to snoop the instruction stream, identify a software implementation of AES, and yoink the keys from it, at least if the implementation isn't obfuscated. If your threat model includes an adversarial CPU, then you probably need to at least obfuscate your implementation, if not offload the crypto entirely to somewhere you trust.
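For a sense of how cheap such fingerprinting could be: table-based software AES embeds the S-box (or T-tables derived from it) as constant data, so even a naive byte scan finds it. A toy sketch in C, matching only the first 16 S-box bytes (a real snooper would watch the fetched data stream rather than a user-space buffer):

    #include <stddef.h>
    #include <string.h>

    /* First 16 bytes of the AES S-box, as embedded in typical
     * table-based software implementations. */
    static const unsigned char sbox_prefix[16] = {
        0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5,
        0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76
    };

    /* Return the offset of an embedded S-box, or -1 if none found. */
    static long find_aes_sbox(const unsigned char *mem, size_t len)
    {
        for (size_t i = 0; i + sizeof sbox_prefix <= len; i++)
            if (memcmp(mem + i, sbox_prefix, sizeof sbox_prefix) == 0)
                return (long)i;
        return -1;
    }

Bitsliced or otherwise table-free constant-time implementations wouldn't be caught this way, which is roughly the "obfuscated" caveat above.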