As I understand, the purpose of "secure enclaves" is to enforce DRM, copyright protection, anti-debugging measures, so breaking them is a good thing.
Well, also used for confidential computing and other stuff that you might benefit from too, so not just to gatekeep stuff. Depending on what you use it for (or rather, what your computer is using it for), you might not want it broken in all cases.
With that said, I'd rather see it broken than not, considering it's mostly used for negative stuff, and it isn't open enough to evaluate if it actually is secure enough.
The purpose of a secure enclave is to prevent the administrator from accessing the data. I don't want anyone doing "confidential computing" on my devices. I am the person who can be trusted, so there is no need to hide the encryption keys from me.
11 replies →
With the rise of "passkeys" that every single website is cramming down our throats now, aren't those also stored in the secure enclave? AKA the keys to your entire encrypted data and digitized life?
I look forward to recordings of the scam calls, where they ask the victim to "place a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into".
1 reply →
Not your keys - not your computer
Imagine this being voted down on hacker news.
They also store passkeys for logging into websites with biometrics and PIN.
So do hard drives.
3 replies →
It’s also where the private keys that secure your data live, so it’s like nuclear power: you can make a bomb or a clean power plant.
No, these should exist in the TPM and in highly volatile memory like the CPU cache, including the decryption code itself. This can be achieved using mechanisms similar to what Coreboot does before RAM is initialized (cache-as-RAM).
There's no need for the keys or the decryption code to touch RAM, which is easily interposed and Rowhammered.
1 reply →
Why should the keys for my device not be accessible to me? The purpose of a secure enclave is to prevent the administrator from accessing the data.
1 reply →
the private keys to secure my data live in my brain
Good. I like the idea of a secure enclave that I own and control when it's in my computer but in practice almost all of them are deployed in a user-hostile way to the benefit of shareholders, to the point that burning the whole idea down would improve society. Imagine if every ROM and piece of CPU microcode was a lot more transparent.
These things are often used because of contractual requirements. Mainstream media including video games are often contractually protected: you must not let it run/play on any device without sufficient hardware protections. So vendors have to include these protection systems even if they don't want to. If the systems were useless, this might end.
More recently, TPM and the systems surrounding it are being effectively used for attestation of the entire OS and driver stack at boot time, from UEFI up to a running OS. DRM sucks, but I do appreciate having some degree of hardware-level defense against rootkits or other advanced malware.
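The boot-time attestation mentioned here works by hash-chaining each boot stage into a TPM Platform Configuration Register (PCR): a PCR can only be extended, never overwritten, so a rootkit that swaps out any stage changes the final digest. A minimal sketch of the extend operation in Python (the stage names are illustrative, not real measurement values):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new_pcr = SHA-256(old_pcr || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start zeroed; each boot stage is measured before it runs.
pcr = bytes(32)
for stage in (b"uefi-firmware", b"bootloader", b"kernel", b"initrd"):
    pcr = pcr_extend(pcr, stage)

# A verifier compares the final PCR against a known-good chain; swapping,
# omitting, or reordering any stage yields a different final digest.
print(pcr.hex())
```

Because the extend is one-way, malware that loads after a stage is measured cannot "unextend" the register to hide the modification.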
Practically though, those systems seem to be pretty weak and are always getting broken, and the TPM itself is another place where malware can hide. It's not clear to me that the benefits could ever outweigh the risks.
1 reply →
From the article, "The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel". "Low-complexity" requires physical access and an OS compromise? What the hell would high complexity be?
A Focused Ion Beam (FIB) workstation: decap the relevant IC and probe its internal connections directly. If it's protected by a mesh, also use the FIB to deposit extra metal to bypass the mesh and make the probe holes. If it's protected by light sensors, bypass those too. Create glitches by shining highly focused lasers onto specific transistors at specific times. Etc. The sorts of attacks Christopher Tarnovsky did on a bunch of TPMs and talked about at DEF CON.
I was looking for the old CCC talk about this stuff, but I ended up finding out about a project called RayV Lite which seeks to democratize this kind of hardware:
https://www.netspi.com/blog/executive-blog/hardware-and-embe...
https://github.com/ProjectLOREM/RayVLite
3 replies →
I thought the point of secure enclaves is to protect against attacks by someone with access to the hardware.
Therefore requiring physical access is still low complexity in context.
Isn't one of the points of a secure enclave that it does not need to trust the rest of the computer it is running on?
And the images also show that "small piece of hardware" is connected to lots of chonky ribbon connectors that make IDE cables look slim.
A wise elder once told me, "There are no secrets in silicon." (e.g. https://www.sciencedirect.com/science/article/abs/pii/S00262...)
If an attacker with time and resources has physical access, you are doomed.
It is also true that making attackers spend time and resources has value. Just because you're trapped in a Red Queen race doesn't mean you should stop running.
But far too often, getting into the TPM on one machine leaks secrets that enable a global compromise. In the case of media piracy, for instance, DRM might inconvenience millions of people, but it takes just one person to crack it, either head-on or through the analog hole, and then the files are on BitTorrent.
It works in practice because most attackers don't have the time, the physical access, or the electron microscopes.
I think it provides a false sense of security in practice. You end up relying on security methods that don't work against adversaries above a certain level of initial investment.
"All three chipmakers exclude physical attacks from threat models for their TEEs."
So, working as intended.
I would think that having TEE means that you can run secure software on unsecured hardware, if that's not the case, then what's the point of TEE in the first place?
If I have my hardware under lock and key in my house, this lets me trust only the CPU vendor, and not the software stack running on my computer, when I try to verify that it is running exactly the workload I intended. With a third party, if I trust you not to tamper with your hardware, this lets both of us remove the people who wrote the hypervisor from our trust base.
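That trust shift is what remote attestation buys: the TEE reports a measurement of the exact workload, signed by a key rooted in the CPU vendor, and the verifier checks it against the build they intended to run. A toy sketch of both sides in Python; the HMAC here merely stands in for the vendor's asymmetric attestation signature (real TEEs use ECDSA keys fused into the silicon), and every name is illustrative:

```python
import hashlib, hmac, os

# Hypothetical stand-in for the vendor's root key; real chips hold a fused,
# never-exported private key, and verifiers hold only the public half.
VENDOR_KEY = b"stand-in-for-cpu-vendor-root-key"

def make_quote(workload: bytes, nonce: bytes):
    """What the TEE hardware produces: a measurement bound to a fresh nonce, signed."""
    measurement = hashlib.sha256(workload).digest()
    sig = hmac.new(VENDOR_KEY, measurement + nonce, hashlib.sha256).digest()
    return measurement, sig

def verify_quote(measurement, sig, expected_workload, nonce):
    """Verifier: trust only the vendor key, not the host's hypervisor or OS."""
    if measurement != hashlib.sha256(expected_workload).digest():
        return False  # not the workload we intended to run
    return hmac.compare_digest(
        sig, hmac.new(VENDOR_KEY, measurement + nonce, hashlib.sha256).digest()
    )

nonce = os.urandom(16)  # freshness: prevents replaying an old quote
m, s = make_quote(b"my-vm-image-v1", nonce)
print(verify_quote(m, s, b"my-vm-image-v1", nonce))  # True
print(verify_quote(m, s, b"tampered-image", nonce))  # False
```

The hypervisor drops out of the trust base because it cannot forge the signature over a measurement it didn't actually run; the interposer attack in the article breaks this by tampering below the measurement, not by forging it.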
Amazon Nitro Enclaves not affected
IMO Amazon is the obvious choice for TEE because they make billions selling isolated compute
If you built a product on Intel or AMD and need to pivot do take a look at AWS Nitro Enclaves
I built up a small stack for Nitro: https://lock.host/ has all the links
MIT everything, dev-first focus
AWS will tell you to use AWS KMS to manage enclave keys
AWS KMS is OK if you are OK with the AWS root account being able to get to the keys
If you want to lock your TEE keys so even root cannot access them, I have something in the works for this
Write to: hello@lock.host if you want to discuss
Nitro Enclaves also require you to trust Amazon. No thanks, I'll take the hardware based solution.
Why wouldn't it be affected?
Because AWS does not sell the Nitro TEE hardware
And so there is no case where you find a Nitro TEE online and the owner is not AWS
And it is practically impossible to break into AWS and perform this attack
The trust model of TEE is always: you trust the manufacturer
Intel and AMD broke this because now they say: you also trust where the TEE is installed
AWS = you trust the manufacturer = full story