Comment by colmmacc
1 day ago
Hybrid schemes give you improved security against algorithmic flaws. If either algorithm being used is broken, the other gives you resilience. But hybrid schemes also double (or more) your exposure to ordinary implementation bugs and side-channels.
Since Quantum Computers at scale aren't real yet, and those kinds of issues very much are, you'd think that'd be quite a trade-off. But so much work has gone into security research and formal verification over the last 10 years that the trade-off really does make sense.
Unless the implementation bug is severe enough to give RCE, memory dumping, or similar, I don't see how a bug in the ML-KEM implementation (for example) would be able to leak the X25519 secret, even with side-channels. A memory-safe impl would almost guarantee you don't have any bugs of the relevant classes (I know memory-safe != side-channel-safe, but I don't see how side-channels would be relevant here). You still need to break both to break the whole scheme.
I've rewritten some PQ implementations that had RCEs and memory disclosure vulnerabilities in them. No shade, but those implementations were from scientists who don't typically build production systems. As an industry, we're past this phase. Side-channels more commonly reveal plaintext than key material, but that shouldn't be fatal in the case of hybrid key agreement.
Based on what we've seen so far in industry research, I'd guess that enabling Denial of Service is the most common kind of issue.
I always wondered about this claim.
If I have a secret, A, and I encrypt it with classical algorithm X so that it becomes A', then encrypt the result again with non-classical algorithm Y so that it becomes A'', doesn't any claim that applying the second algorithm could make it weaker imply that any X-encrypted string could later be made easier to crack by applying Y?
Or is it that by doing them sequentially you could potentially reveal some information about when the encryption took place?
Here we're talking about hybrid key agreement. It's more like: you agree secret A with a peer using the magic of Diffie-Hellman; separately you make up secret B and encapsulate it (encapsulation is basically a form of asymmetric encryption) using a PQ algorithm and send that along; then you derive C by mixing A and B. You're not actually encrypting anything twice.
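Roughly, in Python (a minimal sketch, not any particular implementation: the X25519 part uses the pyca/cryptography API, while the ML-KEM encapsulation is stubbed out with a random placeholder because I'm not assuming any particular PQ library):

    import hashlib
    import os
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # --- Classical part: agree secret A via X25519 Diffie-Hellman ---
    my_priv = X25519PrivateKey.generate()
    peer_priv = X25519PrivateKey.generate()          # stand-in for the remote peer
    secret_a = my_priv.exchange(peer_priv.public_key())

    # --- PQ part: make up secret B and "encapsulate" it for the peer ---
    # Placeholder: a real implementation would call an ML-KEM library's
    # encapsulate() against the peer's ML-KEM public key and send the
    # resulting ciphertext; only the resulting 32-byte shared secret matters here.
    secret_b = os.urandom(32)

    # --- Combiner: derive the session key C by mixing A and B ---
    session_key = hashlib.sha256(secret_a + secret_b).digest()

As long as either A or B stays secret, C stays unpredictable, which is the whole point of the hybrid.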
Some government and military standards do call for multiple layers of encryption when handling data, but it's just that: multiple layers. You can't really make that kind of encryption weaker by adding a new "outer" layer. But you can make it weaker by adding a new "inner" layer that handles the plaintext: side-channels in that inner layer can persist even through multiple layers of encryption.
This is true, but there is a subtle point: the key K1 used for the classical algorithm must be statistically independent of the key K2 used for the other.
If they're not, you could end up in a situation where the second algorithm is correlated with the first in some way and they cancel each other out. (Toy example: suppose K1 == K2 and the algorithms are OneTimePad and InvOneTimePad; they'd just cancel out and give you the null encryption algorithm. More realistically, if I cryptographically recover K2 from the outer encryption and K1 came from the same seed, K1 might become easier to find.)
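A toy sketch of that cancellation, assuming XOR-based pads (where the pad operation is its own inverse, so "OneTimePad" and "InvOneTimePad" are the same function):

    import os

    def otp(data: bytes, key: bytes) -> bytes:
        # XOR one-time pad; with XOR, encrypt and decrypt are the same operation
        return bytes(d ^ k for d, k in zip(data, key))

    plaintext = b"attack at dawn"
    shared_key = os.urandom(len(plaintext))   # K1 == K2

    # Inner layer encrypts, outer "inverse" layer uses the same key:
    # the layers cancel and the composition is the null encryption algorithm.
    doubly_wrapped = otp(otp(plaintext, shared_key), shared_key)
    assert doubly_wrapped == plaintext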
I think the answer is either very simple, or impossible to give without details.
If I recall my crypto classes and definitions correctly, if you have a perfect encryption X, then a ciphertext C = X(K, P) carries zero information about P unless you know K. Thus, once X is applied, Y is not relevant anymore.
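For reference, that "zero information" property is Shannon's perfect secrecy: for every plaintext p and ciphertext c, Pr[P = p | C = c] = Pr[P = p], i.e. seeing the ciphertext doesn't change what you know about the plaintext. The one-time pad achieves it; nothing with a key shorter than the message does.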
Once you have non-perfect encryptions, it depends on X and Y. Why shouldn't some structure in a post-quantum algorithm leak information about, say, the cycle length of the modular arithmetic underlying something like RSA? That information could in turn shave fractions of bits off the effective key length of the underlying algorithm. Those could be the bits that make brute-forcing feasible. Or they could be just another step.
On the other hand, proving that this is impossible is... hard. Would you have thought that a silly sequence about rabbits would be related to a ratio well known in art? There are such crazy connections in math. Proving that two things cannot possibly be connected is about the hardest thing there is.
But that's the thing about crypto: It has to last 50 - 100 years. RSA is on a trajectory out. It had a good run. Now we have new algorithms with new drawbacks.
What kinds of side channels are you thinking of? Given that the key exchanges use a straightforward sha256/sha512 combiner, it would be surprising if a flaw in one of the schemes gave a real vulnerability.
I could see it being more of a problem for signing.
Yeah, key agreement in the context of SSH is quite forgiving of timing side channels as SSH uses ephemeral keys. There's no prospect of repeatedly re-doing the key agreement to gather more statistics on the counterparty's timing.
NSA recommends the rule of two, I think from even before quantum-resistant algorithms:
https://en.wikipedia.org/wiki/Multiple_encryption
The rest of the article has some stuff on what can go wrong if the implementations aren't truly independent.
So you are OK with having your data become suddenly decryptable at some point in the not-so-distant future?
It's a trade-off, yes, but that doesn't make it useless.
>not-so-distant future
Marketing bluster aside, quantum computing is nowhere near close.
Nowhere near close, but getting closer every day. And you should factor in how long your secrets need to last.
Are we guaranteed to approach it at a constant velocity? I personally think it unwise to place my security on that bet.