Comment by api
3 years ago
Here's the counter-argument that I've seen in cryptography circles:
Dual EC, a PRNG built on an asymmetric-crypto template, was kind of a ham-fisted and obvious NOBUS backdoor. The math behind it made such a backdoor entirely plausible.
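To make that concrete, here's a toy sketch of the trapdoor structure (my own construction and helper names like `backdoor_recover`, on a deliberately tiny curve, not the real Dual EC parameters): whoever standardizes the two points P and Q can pick Q = e*P and keep d = e^-1 secret, and that d turns each raw output into the next internal state. The real DRBG truncates its output, which only adds a small brute-force step for the attacker.

```python
from math import gcd

# Toy curve y^2 = x^3 + 2x + 3 over GF(97); tiny on purpose so the
# arithmetic runs instantly. None represents the point at infinity.
P_MOD, A, B = 97, 2, 3

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)

def add(p1, p2):
    # Standard affine point addition, handling infinity and doubling.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(k, pt):
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

def order(pt):
    k, acc = 1, pt
    while acc is not None:
        acc, k = add(acc, pt), k + 1
    return k

# The trapdoor: the standardizer chooses Q = e*P, so they know d
# with d*Q = P. Nobody else can feasibly compute d from P and Q.
P_pt = (0, 10)                 # base point: 10^2 = 3 = 0^3 + 2*0 + 3 (mod 97)
n = order(P_pt)
e = next(k for k in range(2, n) if gcd(k, n) == 1)
Q = mul(e, P_pt)
d = pow(e, -1, n)              # the NOBUS secret: d*Q == P_pt

def dualec_step(s):
    # One simplified Dual EC step: r = x(s*P), output = x(r*Q),
    # next state = x(r*P). The real DRBG also truncates the output,
    # which only costs the attacker a small brute-force search.
    R = mul(s, P_pt)
    if R is None: return None
    r = R[0]
    S, O = mul(r, P_pt), mul(r, Q)
    return None if S is None or O is None else (S[0], O[0])

def backdoor_recover(out_x):
    # Lift the output x back to a point R = +/- r*Q (either y works),
    # then d*R = r*(d*Q) = r*P, whose x-coordinate is the next state.
    for y in range(P_MOD):
        if (y * y - (out_x ** 3 + A * out_x + B)) % P_MOD == 0:
            return mul(d, (out_x, y))[0]

s0 = next(s for s in range(2, n) if dualec_step(s) is not None)
next_state, output = dualec_step(s0)
print(backdoor_recover(output) == next_state)  # True: output reveals state
```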
That's less obvious in other cases.
Take the NIST ECC curves. If they're backdoored, it means the NSA knows something about ECC we don't know and haven't discovered in the 20+ years since those curves were developed. It also means the NSA was able to search all ECC curves to find vulnerable ones using 1990s technology. Multiple cryptographers have argued that if this is true we should seriously consider leaving ECC altogether. It means a significant proportion of ECC curves may be problematic. It means that, for all we know, Curve25519 is a vulnerable curve too, since this hypothetical vulnerability would be based on math we don't understand.
The same argument could apply to Speck:
https://en.wikipedia.org/wiki/Speck_(cipher)
Speck is incredibly simple, with very few places a "mystery constant" or other backdoor could hide. If Speck is backdoored, it means the NSA knows something about ARX constructions that we don't, and we'd have no idea whether this mystery math also applies to ChaCha or BLAKE or any of the other popular ARX constructions gaining so much usage right now. That means if we (hypothetically) knew for a fact that Speck was backdoored, but not how, it might make sense to move away from ARX ciphers entirely. It might mean many or all of them are not as secure as we think.
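For a sense of just how little surface area there is, here's a minimal sketch of the round function for the 64-bit-word variants (the published rotation constants are 8 and 3 for every variant except the smallest, 32-bit-block one, which uses 7 and 2):

```python
MASK = 2**64 - 1  # 64-bit words

def ror(x, r):  # rotate right within a 64-bit word
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):  # rotate left within a 64-bit word
    return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    # ARX: Add, Rotate, Xor are the only operations anywhere in the
    # cipher; even the key schedule reuses this same round function,
    # with a round counter in place of the key word.
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y
```

That's the whole cipher: no S-boxes, no magic tables, nowhere obvious for a constant to hide.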
SM2 (Chinese), GOST (Russian), and NIST P (American) parameters are of the "you'll just have to take these on faith" variety: for all anyone can verify, they are something-up-our-sleeve numbers.
The ECGDSA/Brainpool (German) and ECKCDSA (Korean) standards make an attempt to explain how they chose their recommended parameters, but at least for the Brainpool parameters, the justifications fall short.
The DiSSECT[1] project, published this year, is an excellent approach to estimating whether selected parameters (often given without justification) are suspicious. The GOST parameters were found to be particularly suspicious.
I wonder if a similar project could be viable for assessing the parameters of other types of cryptographic algorithms, e.g. the Rijndael S-box vs. the SM4 S-box selection?
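As a rough sketch of what one such check might look like, here's differential uniformity (a standard S-box quality measure; lower means better resistance to differential cryptanalysis) computed over the Rijndael S-box, which the code derives algebraically from its GF(2^8) definition. Swapping in SM4's published 256-entry table for `aes` would give the comparison point; the helper names are mine.

```python
def gf_mul(a, b):
    # Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1.
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def aes_sbox(x):
    # Inversion in GF(2^8) (with 0 -> 0) followed by the AES affine map.
    b = gf_pow(x, 254) if x else 0
    out = 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> (i + 4) % 8) ^ (b >> (i + 5) % 8)
               ^ (b >> (i + 6) % 8) ^ (b >> (i + 7) % 8) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

def diff_uniformity(sbox):
    # Max over nonzero input differences a of how many inputs map to
    # the single most common output difference.
    worst = 0
    for a in range(1, 256):
        counts = [0] * 256
        for x in range(256):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst

aes = [aes_sbox(x) for x in range(256)]
print(diff_uniformity(aes))  # prints 4 for the AES S-box
```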
[1] https://dissect.crocs.fi.muni.cz/
Interesting link, and yes, the GOST curves do look really suspect. I didn't see a graph for the NIST curves, and the authors don't appear to have called them out.
There's a big difference with the GOST curves, though: they were generated in what seems to be a 100% opaque manner, meaning they could have been back-calculated from something.
The NIST curves were generated in a way that was verifiably pseudorandom (generation involved hashing a published constant), but the constant itself was never explained. That makes it effectively impossible to straight-up back-calculate these curves from something else. NIST/NSA would instead have had to brute-force search for seeds giving rise to breakable curves, which is the basis of the reasoning from the cryptographers I described above.
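Here's a toy model of that brute-force scenario (entirely my own construction; the real X9.62 routine uses SHA-1 and ~256-bit primes, but it has the same seed -> hash -> coefficient shape). The stand-in "weakness" is a smooth group order, which makes discrete logs easy via Pohlig-Hellman; a hypothetical NSA criterion would be some property only they knew to test for:

```python
import hashlib

P = 10007   # toy field prime; real curves use ~256-bit primes
A = P - 3   # NIST-style a = -3

# Precompute how many y values hit each quadratic residue, for point counting.
SQUARES = {}
for y in range(P):
    SQUARES[y * y % P] = SQUARES.get(y * y % P, 0) + 1

def curve_order(b):
    # Naive count of points on y^2 = x^3 + Ax + b, plus infinity.
    # Fine at this size only.
    return 1 + sum(SQUARES.get((x**3 + A*x + b) % P, 0) for x in range(P))

def b_from_seed(seed):
    # The "verifiably pseudorandom" step: anyone can recompute b from
    # the published seed, so b itself can't be chosen directly...
    digest = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % P

def is_smooth(n, bound=30):
    for q in range(2, bound + 1):
        while n % q == 0:
            n //= q
    return n == 1

# ...but nothing stops the generator from grinding seeds until the
# derived curve passes a weakness test only they know to apply.
for seed in range(1, 10**6):
    b = b_from_seed(seed)
    if (4 * A**3 + 27 * b * b) % P == 0:
        continue  # singular curve, skip
    if is_smooth(curve_order(b)):
        print(f"seed {seed}: b = {b}, order {curve_order(b)} is 30-smooth")
        break
```

The published seed proves only that b came out of a hash, not how the seed was chosen.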
Note that the cryptographers I've seen make this argument aren't arguing that the NIST curves could not be suspect. What they're arguing is that if they are in fact vulnerable and were found by brute force search using 90s computers, all of elliptic curve cryptography may be suspect. If we (hypothetically) knew for a fact they were vulnerable but did not know the vulnerability, we'd know that some troubling percentage of ECC curves are vulnerable to something we don't know and would have no way of checking other curves. We'd also have no way of knowing if other ECC constructions like Edwards curves or Koblitz curves are more or less vulnerable.
So the argument is: either the NIST curves are likely okay, or maybe don't use ECC at all.
Bruce Schneier was for a time a proponent of going back to RSA and classical DH but with large (4096+ bit) keys for this reason. RSA has some implementation gotchas but the math is better understood than ECC. Not sure if he still advocates this.
Personally, I think the most likely origin of the NIST constants was /dev/urandom. Remember that these were generated back in the 1990s, before things like curve rigidity were popular topics of discussion in cryptography circles. The goal was to get working curves with some desirable properties, and that's about it.
That’s a great project, thank you for the link. Take my upvote, stranger.
Regarding Simon and Speck: one simple answer is that complicated attacks may exist, and simple attacks certainly exist for the smaller block and key sizes.
However, it’s really not necessary to backdoor ARX designs directly when they’re specified with key sizes of 64, 72, 96, 128, 144, 192 or 256 bits and block sizes of 32, 48, 64, 96 or 128 bits, especially if quantum computers arrive while these ciphers are still deployed. Their largest block size is the smallest available in other modern block ciphers, and the three smallest block sizes listed are laughable.
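A quick back-of-the-envelope, using standard birthday-bound reasoning for block cipher modes, shows what those block sizes buy you; Sweet32 demonstrated the 64-bit case in practice against TLS:

```python
# In common modes (CBC, CTR, ...), block collisions start leaking
# plaintext structure after roughly 2^(n/2) blocks of n-bit data.

def human(nbytes):
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
        if nbytes < 1024:
            return f"{nbytes:.3g} {unit}"
        nbytes /= 1024
    return f"{nbytes:.3g} ZiB"

for n in (32, 48, 64, 96, 128):
    blocks = 2 ** (n // 2)  # birthday bound, in blocks
    print(f"{n:3d}-bit blocks: ~2^{n // 2} blocks = {human(blocks * n // 8)}")
```

For the 32-bit block that’s about 256 KiB of data before collisions are expected; for 48 bits, about 96 MiB. Laughable indeed.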
They do have larger key sizes specified at the upper end, but consider that if the smaller keys are “good enough for NSA”, they will be used, and exploited, in practice. Not all bits are equal either: Simon’s or Speck’s 128 bits are doubtfully as strong as AES’s 128 bits, certainly not at half the block size. It also doesn’t inspire confidence that AES had rounds removed, or that AES-256’s block size is… 128 bits. Suite A cryptography probably doesn’t include a lot of 32-bit block sizes; indeed, BATON supposedly bottoms out at 96 bits. One block size for me, another for thee?
In a conversation at FSE 2015, one of Speck’s authors stated that for some systems only a few minutes of confidentiality were really required. This was said openly!
In my view this is consistent with the NSA once again intentionally pushing crypto that can be broken under certain conditions, to their benefit. This can probably be exploited in practice through brute force, given their computational resources.
Many symmetric cryptographers literally laugh at the NSA’s designs, and at the papers attempting to justify them.
Regarding the NIST curves, the SafeCurves project shows that implementing them safely is difficult. That doesn’t seem like an accident to me, but perhaps I’m too cynical? Side channels are probably enough for targeted breaks. NIST-standardized ECC doesn’t need to be exploited in ways that cryptographers respect; it just needs to work for NSA’s needs.
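As a sketch of the kind of pitfall SafeCurves catalogues, compare textbook double-and-add scalar multiplication, which does extra work exactly when a secret key bit is 1, with a Montgomery ladder, which performs the same operations for every bit. The demo “group” (integers mod 1009 under addition) is mine, standing in for curve points so the sketch runs; the control-flow issue is identical in real EC code:

```python
def scalar_mul_leaky(k, p, add, double, identity):
    # Secret-dependent branch: '1' bits cost an extra group add,
    # which shows up in timing and power traces.
    acc = identity
    while k:
        if k & 1:
            acc = add(acc, p)
        p = double(p)
        k >>= 1
    return acc

def scalar_mul_ladder(k, p, add, double, bits, identity):
    # Montgomery ladder: one add and one double per bit, whatever the
    # key. (Real constant-time code must also replace this Python-level
    # branch with a constant-time conditional swap.)
    r0, r1 = identity, p          # invariant: r1 == r0 + p
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            r0, r1 = add(r0, r1), double(r1)
        else:
            r1, r0 = add(r0, r1), double(r0)
    return r0

M = 1009
demo_add = lambda a, b: (a + b) % M
demo_double = lambda a: 2 * a % M
k = 123456
assert scalar_mul_leaky(k, 1, demo_add, demo_double, 0) == k % M
assert scalar_mul_ladder(k, 1, demo_add, demo_double, 20, 0) == k % M
print("same result; only the ladder's operation sequence is key-independent")
```

Getting this (plus point validation, completeness of the addition formulas, and so on) right for the NIST curves is exactly the burden SafeCurves documents.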