
Comment by dhx

3 years ago

SM2 (Chinese), GOST (Russian), and NIST P (American) parameters offer no justification at all: you just have to take it on faith that they aren't "something up our sleeve" numbers.

The ECGDSA/Brainpool (German) and ECKCDSA (Korean) standards make an attempt to explain how their recommended parameters were chosen, but at least for the Brainpool parameters the justification falls short.

The DiSSECT [1] project, published earlier this year, is an excellent approach to estimating whether selected parameters (often chosen without justification) are suspicious. The GOST parameters were found to be particularly suspicious.

I wonder whether a similar project could be viable for assessing the parameters of other types of cryptographic algorithms, e.g. the Rijndael S-box vs. the SM4 S-box selection.
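
(For contrast, the Rijndael S-box is about as rigid as a parameter choice gets: FIPS 197 spells out the whole construction as inversion in GF(2^8) followed by a fixed affine map, so anyone can rebuild the table. A quick Python sketch of that construction, just to illustrate the level of justification I mean; the spot-check values are the ones published in FIPS 197:)

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return result

def gf_inv(x: int) -> int:
    """Multiplicative inverse in GF(2^8); FIPS 197 maps 0 to 0."""
    if x == 0:
        return 0
    # x^254 == x^-1 because the multiplicative group has order 255.
    y = 1
    for _ in range(254):
        y = gf_mul(y, x)
    return y

def sbox(x: int) -> int:
    """AES S-box: GF(2^8) inversion followed by the published affine transformation."""
    y = gf_inv(x)
    out = 0
    for i in range(8):
        bit = ((y >> i) ^ (y >> ((i + 4) % 8)) ^ (y >> ((i + 5) % 8)) ^
               (y >> ((i + 6) % 8)) ^ (y >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

# Spot-check against the published table: S(0x00)=0x63, S(0x01)=0x7C, S(0x53)=0xED.
print([hex(sbox(x)) for x in (0x00, 0x01, 0x53)])
```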

[1] https://dissect.crocs.fi.muni.cz/

Interesting link, and yes, it does look like the GOST curves are really suspect. I didn't see a graph for the NIST curves, though, and the authors don't appear to have called them out.

There's a big difference with the GOST curves, though: they were generated in what seems to be a completely opaque manner, meaning they could have been back-calculated from something.

The NIST curves were generated in a way that is verifiably pseudorandom (generation involved hashing a seed constant), but the seed itself was never explained. That makes it effectively impossible to straight-up back-calculate these curves from something else; NIST/NSA would instead have had to brute-force search for seeds giving rise to breakable curves, which is the basis of the reasoning from the cryptographers I quoted above.
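
To make the "hash of a constant" step concrete, here is a rough Python sketch of the ANSI X9.62 seed-to-curve check as I read it, using the published P-256 seed and b coefficient; treat the exact bit handling as my interpretation of the standard rather than anything authoritative:

```python
import hashlib

# Published P-256 domain parameters (FIPS 186-4); a = p - 3 for the NIST prime curves.
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
seed = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")  # the unexplained constant

def x962_r_from_seed(seed: bytes, field_bits: int) -> int:
    """Derive the candidate value r from SEED, following ANSI X9.62 A.3.3.1 (my reading)."""
    g = len(seed) * 8                      # seed length in bits (160 for the NIST curves)
    s = (field_bits - 1) // 160
    v = field_bits - 160 * s
    h = int.from_bytes(hashlib.sha1(seed).digest(), "big")
    w = h & ((1 << v) - 1)                 # rightmost v bits of SHA-1(SEED) ...
    w &= ~(1 << (v - 1))                   # ... with the top bit of that chunk cleared
    z = int.from_bytes(seed, "big")
    for i in range(1, s + 1):              # append SHA-1(SEED + i) for i = 1..s
        si = ((z + i) % (1 << g)).to_bytes(g // 8, "big")
        w = (w << 160) | int.from_bytes(hashlib.sha1(si).digest(), "big")
    return w

r = x962_r_from_seed(seed, 256)
# X9.62 requires r * b^2 == a^3 (mod p); with a = -3 that means r * b^2 == -27 (mod p).
print((r * b * b + 27) % p == 0)           # if my reading is right, this prints True
```

The derivation itself is checkable by anyone; what is not checkable is where that 160-bit seed came from, which is the entire controversy.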

Note that the cryptographers I've seen make this argument aren't claiming the NIST curves could not be suspect. What they're arguing is that if the curves are in fact vulnerable and were found by brute-force search using 1990s computers, then all of elliptic curve cryptography may be suspect. If we (hypothetically) knew for a fact that they were vulnerable but didn't know the vulnerability, we'd know that some troubling percentage of ECC curves is vulnerable to something we don't understand, with no way of checking other curves. We'd also have no way of knowing whether other ECC constructions like Edwards curves or Koblitz curves are more or less vulnerable.

So the argument is: either the NIST curves are likely okay, or maybe don't use ECC at all.

Bruce Schneier was for a time a proponent of going back to RSA and classical DH, but with large (4096+ bit) keys, for this reason. RSA has some implementation gotchas, but the math is better understood than ECC's. I'm not sure if he still advocates this.
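
If you do go that route, there's nothing exotic about 4096-bit keys; a minimal sketch with Python's pyca/cryptography package (assuming it's installed):

```python
# Generate a 4096-bit RSA private key and dump the start of its PEM encoding.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)
print(pem.decode()[:64], "...")
```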

Personally, I think the most likely origin of the NIST seed constants was /dev/urandom. Remember that these curves were generated back in the 1990s, before things like curve rigidity were a popular topic of discussion in cryptography circles. The goal was to get working curves with some desirable properties, and that's about it.