
Comment by jddj

1 day ago

I always wondered about this claim.

If I have a secret A and I encrypt it with classical algorithm X so that it becomes A', then encrypt the result again with non-classical algorithm Y so that it becomes A'', doesn't any claim that applying the second algorithm could make it weaker imply that any X-encrypted string could later be made easier to crack by applying Y?

Or is it that by doing them sequentially you could potentially reveal some information about when the encryption took place?

Here we're talking about hybrid key agreement. It's more like: you agree a secret A with a peer using the magic of Diffie-Hellman; separately, you make up a secret B, encapsulate it using a PQ algorithm (encapsulation is basically a form of asymmetric encryption), and send that on; then you derive C by mixing A and B. You're not actually encrypting anything twice.
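
For concreteness, here's a minimal Python sketch of that flow, assuming X25519 and HKDF from the pyca/cryptography package; the PQ step is a hypothetical `pq_encapsulate` placeholder, since the real KEM binding depends on your library:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# 1. Classical: agree secret A with the peer via Diffie-Hellman (X25519).
my_priv = X25519PrivateKey.generate()
peer_priv = X25519PrivateKey.generate()   # stand-in for the peer's side
secret_a = my_priv.exchange(peer_priv.public_key())

# 2. Post-quantum: make up secret B, encapsulate it to the peer's PQ
#    public key, and send the ciphertext on.
# pq_ciphertext, secret_b = pq_encapsulate(peer_pq_public_key)  # hypothetical
secret_b = b"\x00" * 32   # placeholder for the KEM shared secret

# 3. Derive C by mixing A and B: an attacker must break *both* inputs
#    to learn C.
secret_c = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid key agreement",
).derive(secret_a + secret_b)
```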

Some government and military standards do call for multiple layers of encryption when handling data, but it's just that: multiple layers. You can't ever really make that kind of encryption weaker by adding a new "outer" layer. But you can make encryption weaker by adding a new "inner" layer that handles the plaintext: side channels in that inner layer can persist through every layer of encryption stacked on top of it, as in the sketch below.
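
A toy Python illustration of the inner-layer problem, with random XOR pads standing in for real length-preserving ciphers: if the inner layer compresses the plaintext before encrypting (a classic source of length side channels), the outer layer faithfully preserves the leak.

```python
import os
import zlib

def xor_pad(data: bytes) -> bytes:
    """Stand-in for a length-preserving encryption layer."""
    pad = os.urandom(len(data))
    return bytes(d ^ p for d, p in zip(data, pad))

def inner_layer(plaintext: bytes) -> bytes:
    # Compress-then-encrypt: ciphertext length now depends on
    # plaintext redundancy, which is a side channel.
    return xor_pad(zlib.compress(plaintext))

low_entropy = xor_pad(inner_layer(b"A" * 64))       # outer layer on top
high_entropy = xor_pad(inner_layer(os.urandom(64)))

# The length difference survives both layers of "encryption".
print(len(low_entropy), len(high_entropy))
```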

This is true, but there is a subtle point: the key K1 used for the classical algorithm must be statistically independent of the key K2 used for the other algorithm.

If they're not, you could end up in a situation where the second algorithm is correlated with the first in some way and they cancel each other out. (Toy example: suppose K1 == K2 and the algorithms are OneTimePad and InvOneTimePad; they'd just cancel out to give the null encryption algorithm. More realistically, if I cryptographically break K2 from the outer encryption and K1 came from the same seed, K1 might then be easier to find.)
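
A quick Python check of the toy example; since XOR is its own inverse, the "inverse pad" is just the pad again:

```python
def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with the key."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn"
k1 = k2 = bytes(range(len(plaintext)))   # correlated keys: K1 == K2

inner = otp(plaintext, k1)   # first layer
outer = otp(inner, k2)       # second layer undoes the first
assert outer == plaintext    # the composition is the null cipher
```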

I think the answer is either very simple, or impossible to give without details.

If I recall my crypto classes and definitions correctly: if you have a perfect encryption X, a ciphertext C = X(K, P) carries zero information about P unless you know K. Thus, once X is applied, Y is not relevant anymore.
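
That definition is easy to sanity-check with the simplest perfect cipher, a one-byte one-time pad: given only the ciphertext, every plaintext byte remains possible (each under exactly one key), so the ciphertext alone says nothing about P.

```python
C = 0x5A                                   # an intercepted ciphertext byte
candidates = {C ^ K for K in range(256)}   # P = C XOR K for every key K
assert candidates == set(range(256))       # all 256 plaintexts reachable
```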

Once you have non-perfect encryptions, it depends on X and Y. Why shouldn't some structure in a post-quantum algorithm give you information about, say, the cycle length of the modular arithmetic underlying RSA? That information in turn could shave fractions of bits off the effective key length of the underlying algorithm. Those could be the bits that make brute force feasible. Or they could be just another step.

On the other hand, proving that this is impossible is... well, would you have guessed that a silly sequence about rabbits would be related to a ratio well known in art? There are such crazy connections in math. Proving that two things cannot possibly be connected is the hardest thing of all.

But that's the thing about crypto: it has to last 50 to 100 years. RSA is on a trajectory out. It had a good run. Now we have new algorithms with new drawbacks.