
Comment by Foxboron

4 days ago

> See for example the many problems of NIST P-224/P-256/P-384 ECC curves

What are those problems exactly? The whitepaper from djb only makes vague claims about the NSA being a malicious actor, but after ~20 years no known backdoors or intentional weaknesses have been reliably demonstrated?

As I understand it, a big issue is that they are really hard to implement correctly. This means that backdoors and weaknesses might not exist in the theoretical algorithm, but still be common in real-world implementations.

On the other hand, Curve25519 is designed from the ground up to be hard to implement incorrectly: there are very few footguns, gotchas, and edge cases. This means that real-world implementations are likely to be correct implementations of the theoretical algorithm.

This means that, even if P-224/P-256/P-384 are on paper exactly as secure as Curve25519, they could still end up being significantly weaker in practice.
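
To make "hard to implement correctly" concrete, the classic P-curve failure is skipping point validation: if you run scalar multiplication on an attacker-supplied point that isn't actually on the curve, you can leak key material (the invalid-curve attack). The check itself is tiny; the problem was that implementations forgot it. A minimal sketch (the constants are the standard P-256 parameters; the function name is mine):

  # NIST P-256: y^2 = x^3 - 3x + b over GF(p)
  P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
  B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

  def is_valid_p256_point(x: int, y: int) -> bool:
      # Reject coordinates outside the field first.
      if not (0 <= x < P and 0 <= y < P):
          return False
      # Then check the curve equation itself; P-256 has cofactor 1,
      # so an on-curve, non-infinity point needs no subgroup check.
      return (y * y - (x * x * x - 3 * x + B)) % P == 0

  # The standard base point must pass; a corrupted point must not.
  Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
  Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
  assert is_valid_p256_point(Gx, Gy)
  assert not is_valid_p256_point(Gx, Gy + 1)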

  • I tried to defend a similar argument in a private forum today and basically got my ass handed to me. In practice, not only would modern P-curve implementations not be "significantly weaker" than Curve25519 (we've had good complete addition formulas for them for a long time, along with widespread hardware support), but Curve25519 causes as many (probably more) problems than it solves: cofactor problems are more common in modern practice than point validation mistakes (see the sketch at the end of this comment).

    In TLS, Curve25519 vs. the P-curves is a total non-issue, because TLS isn't generally deployed anymore in ways that even admit point validation vulnerabilities (even if implementations still had them). That bit I already knew, but I'd assumed ad-hoc non-TLS implementations, by random people who don't know what point validation is, might tip the scales. Turns out, apparently not.

    Again, by way of bona fides: I woke up this morning in your camp, regarding Curve25519. But that won't be the camp I go to bed in.
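
    For the cofactor point above, a concrete example: Curve25519's group order is 8 times a large prime, so a handful of small-order public keys exist, and X25519 returns an all-zero shared secret for them no matter what your private key is. Protocols that assume the exchange is contributory get burned unless they add the check from RFC 7748, section 6.1. A sketch (x25519_fn stands in for whatever library primitive you use):

      def checked_x25519(x25519_fn, private_key: bytes, peer_public: bytes) -> bytes:
          shared = x25519_fn(private_key, peer_public)
          # RFC 7748 s6.1: abort on the all-zero output, which signals
          # that the peer sent one of the small-order points.
          if shared == bytes(32):
              raise ValueError("small-order peer public key; aborting")
          return shared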

    • I agree that Curve25519 and other "safer" algorithms are far from immune to side channel attacks in their implementation. For example, [1] is a single-trace EM side channel key recovery attack against Curve25519 as implemented in MbedTLS on an ARM Cortex-M4. That implementation had the benefit of a constant-time Montgomery ladder, an approach NIST P-curve implementations have traditionally lacked, but it nonetheless failed due to a conditional swap instruction that leaked secret state via EM (see the sketch after the references below).

      The general question is whether a standard written in 2025 could build upon decades of research and implementation failures to specify side-channel-resistant algorithms that address conditional jumps, processor optimisations for math functions, and other mechanisms which might leak secret state via timing, power or EM signals. See for example section VI of [1], which proposed a new side channel countermeasure that ended up being implemented in MbedTLS to mitigate the conditional swap instruction leak. Could such countermeasures be added to the standard in the first instance, rather than left to implementers to figure out from their review of IACR papers?

      One could argue that standards simply follow the interests of standards proposers and organisations, who might not care about cryptography implementations on smart cards, TPMs, etc., or about side channel attacks between different containers on the same host. Instead, perhaps standards proposers and organisations only care about side channel resistance across remote networks with high noise floors for timing signals, where attacks such as [2] (300ns timing signal) are not considered feasible. If this is the case, I would argue that the standards should still state their security model more clearly, for example:

      * Is the standard assuming the implementation has a noise floor of 300ns for timing signals, 1ms, etc? Are there any particular cryptographic primitives that implementers must use to avoid particular types of side channel attack (particularly timing)?

      * Implementation fingerprinting resistance/avoidance: how many choices can an implementation make that may allow a cryptosystem party to be deanonymised by the specific version of a crypto library in use?[3] Does the standard provide any guarantee for fingerprinting resistance/avoidance?

      [1] Template Attacks against ECC: practical implementation against Curve25519, https://cea.hal.science/cea-03157323/document

      [2] CVE-2024-13176: OpenSSL timing side-channel in ECDSA signature computation, https://openssl-library.org/news/vulnerabilities/index.html#...

      [3] Table 2, pyecsca: Reverse engineering black-box elliptic curve cryptography via side-channel analysis, https://tches.iacr.org/index.php/TCHES/article/view/11796/11...
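
      For the conditional-swap leak mentioned above, the baseline countermeasure is to make the swap branch-free, expanding the secret bit into a mask so the same instructions run either way. A sketch of the arithmetic (illustrative only: Python integers are nothing like constant-time, and [1] shows even the branch-free version can leak through EM, which is what motivated the extra countermeasure in its section VI):

        LIMB_MASK = (1 << 64) - 1  # assuming 64-bit limbs

        def ct_cswap(secret_bit: int, a: int, b: int) -> tuple[int, int]:
            # Expand the bit into an all-zeros or all-ones mask
            # instead of branching on it.
            mask = (-secret_bit) & LIMB_MASK
            t = mask & (a ^ b)
            return a ^ t, b ^ t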

  • > As I understand it, a big issue is that they are really hard to implement correctly.

    Any reference for the "really hard" part? That is a very interesting subject and I can't imagine it's independent of the environment and development stack being used.

    I'd welcome any standard that's "really hard to implement correctly" as a testbed for improving our compilers and other tools.

    • I posted above, but most of the 'really hard' bits come from the unreasonable complexity of actual computing vs the more manageable complexity of computing-with-idealized-software.

      That is, using such an algorithm as a safety smoke test for compilers and tools, and improving them thereby, is good. But when you're trying to harden these implementations you also need to think hard about what happens when someone induces an RF pulse at specific timings targeted at a certain part of a circuit board, say. Lots of these are things that compiler architects typically say are "not my problem".
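
      One software-visible countermeasure from that fault-injection world, for what it's worth: harden signers by verifying their own output before releasing it, so a glitched computation fails closed instead of leaking key material. A sketch (sign_fn and verify_fn are placeholders, and this is one layer of a hardening story, not the whole of it):

        def sign_with_fault_check(sign_fn, verify_fn, priv, pub, msg):
            sig = sign_fn(priv, msg)
            # A fault during signing tends to produce an invalid (and
            # potentially key-leaking) signature; refuse to emit it.
            if not verify_fn(pub, msg, sig):
                raise RuntimeError("verification failed; possible fault injection")
            return sig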

It would be wise for people to remember that it's worth doing basic sanity checks before making claims like "no backdoors from the NSA". Strong encryption has been restricted historically, which is how we got things like DES and 3DES and Crypto AG. In the modern internet age, Juniper had a bad time with this one: https://www.wired.com/2013/09/nsa-backdoor/.

Usually it’s really hard to distinguish intent, and so it’s possible to develop plausible deniability with committees. Their track record isn’t perfect.

With WPA3, cryptographers warned about the known pitfall of standardizing a timing-sensitive PAKE, and Harkins got it through anyway. Since it was a standard, the WiFi committee gladly selected it, which then resulted in dragonbleed among other bugs. The hash-to-curve techniques have since patched that.
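
For context, the timing problem is structural: Dragonfly derives a curve point by "hunting and pecking", hashing the password with an incrementing counter until a valid x-coordinate appears, so the loop count, and hence the timing, depends on the password. The spec's mitigation was to always run a fixed number of iterations, padding with random stubs after a hit; Dragonblood demonstrated residual leaks anyway, which is why constant-time hash-to-curve maps are the real fix. A rough sketch of the loop shape (hash_to_candidate and is_valid_x are placeholders, not real SAE):

  import secrets

  MIN_ITERATIONS = 40  # a commonly cited minimum from the Dragonfly spec

  def derive_x(password: bytes, hash_to_candidate, is_valid_x) -> int:
      found = None
      for counter in range(1, MIN_ITERATIONS + 1):
          # After the first hit, keep hashing random stub data so the
          # iteration count no longer depends on the password.
          material = password if found is None else secrets.token_bytes(32)
          candidate = hash_to_candidate(material, counter)
          if is_valid_x(candidate) and found is None:
              found = candidate
      if found is None:
          raise RuntimeError("no valid candidate within the iteration bound")
      return found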

  • It's "Dragonblood", not "Dragonbleed". I don't like Harkin's PAKE either, but I'm not sure what fundamental attribute of it enables the downgrade attack you're talking about.

    When you're talking about the P-curves, I'm curious how you get your "sanity check" argument past things like the Koblitz/Menezes "Riddle Wrapped In An Enigma" paper. What part of their arguments did you not find persuasive?

    • Yes, Dragonblood. I'm not speaking of the downgrade but of the timing side channels, which were called out very loudly and then ignored during standardization. And then the PAKE showed up in WPA3 of all places; that was the key issue, and it was extended further in a Brainpool-curve-specific attack on the proposed initial mitigation. It's a good example of error by committee. I don't address that article and don't know why the NSA advised migration that early.

      The riddle paper I've not read in a long time, if ever, though I don't understand the question. As Scott Aaronson recently blogged, it's difficult to predict human progress with technology, and it's possible we'll see Shor's algorithm running publicly sooner than the consensus expects. It could be that in 2035 the NSA's call 20 years prior looks like it was the right one, in that ECC is insecure, but that wouldn't make the replacements secure by default, of course.


  • The NSA changed the S-boxes in DES, and this made people suspicious that they had planted a back door. But when differential cryptanalysis was discovered, people realized that the NSA's changes to the S-boxes made them more secure against it.

    • That was 50 years ago. Since then we have had an NSA employee co-authoring the paper which led to Heartbleed, the backdoor in Dual EC DRBG, which has been successfully exploited by adversaries, and documentation from Snowden which confirms NSA compromise of standards-setting committees.


    • The NSA also wanted a 48-bit key, which was sufficiently weak to brute-force with the computing power they had. Industry and IBM initially wanted 64 bits. IBM compromised and gave us 56 bits.

    • Yes, NSA made DES stronger. After first making it weaker. IBM had wanted a 128-bit key, then they decided to knock that down to 64-bit (probably for reasons related to cost, this being the 70s), and NSA brought that down to 56-bit because hey! we need parity bits (we didn't).

ECDSA over the P-curves is vulnerable to "High-S" malleable signatures, while Ed25519 isn't. No one is claiming they're backdoored (well, some people somewhere probably are), but they do have failure modes that Ed25519 doesn't, which is the GP's point.
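
Concretely: an ECDSA signature is a pair (r, s) over a group of order n, and if (r, s) verifies then so does (r, n - s), so any system that treats signature bytes as unique identifiers breaks unless it canonicalizes. The usual fix is to insist on the "low-S" form, essentially the rule Bitcoin ended up enforcing. A sketch with the P-256 group order:

  # NIST P-256 group order
  N = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

  def normalize_s(s: int) -> int:
      # (r, s) and (r, N - s) verify identically; pick the low-S
      # representative to kill the malleability.
      return N - s if s > N // 2 else s

  def is_canonical_s(s: int) -> bool:
      return 1 <= s <= N // 2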

In the NIST curve arena, I think DJB's main concern is engineering implementation. From an online slide deck he published:

  We’re writing a document “Security dangers of the NIST curves”
  Focus on the prime-field NIST curves
  DLP news relevant to these curves? No
  DLP on these curves seems really hard
  So what’s the problem?
  Answer: If you implement the NIST curves, chances are you’re doing it wrong
  Your code produces incorrect results for some rare curve points
  Your code leaks secret data when the input isn’t a curve point
  Your code leaks secret data through branch timing
  Your code leaks secret data through cache timing
  Even more trouble in smart cards: power, EM, etc.
  Theoretically possible to do it right, but very hard
  Can anyone show us software for the NIST curves done right?

As to whether or not the NSA is a strategic adversary to some people using ECC curves, I think that's right in the mandate of the org, no? If a current standard is super hard to implement, and theoretically strong at the same time, that has to make someone happy on a red team. At least, it would make me happy, if I were on such a red team.
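
To make the slide's "incorrect results for some rare curve points" line concrete: the textbook affine addition formula divides by (x2 - x1), so it cannot handle P = Q, P = -Q, or the point at infinity, and code that forgets those cases either crashes on a division by zero or returns a plausible-looking wrong point an attacker can steer you into. A toy sketch of the trap:

  def affine_add(p: int, P: tuple, Q: tuple) -> tuple:
      # Textbook chord rule over GF(p); correct ONLY for generic pairs.
      (x1, y1), (x2, y2) = P, Q
      if x1 == x2:
          # Exceptional cases the naive formula cannot express:
          #   y1 == -y2 mod p : the sum is the point at infinity
          #   P == Q          : needs the tangent (doubling) formula
          raise ValueError("exceptional case hit")
      lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
      x3 = (lam * lam - x1 - x2) % p
      y3 = (lam * (x3 - x1) - y1) % p
      return (x3, y3)

The complete addition formulas mentioned upthread compute the right answer for every input pair, which is a large part of why modern P-curve code is no longer the minefield it once was.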

  • He does a motte-and-bailey thing with the P-curves. I don't know if it's intentional or not.

    Curve25519 was a materially important engineering advance over the state of the art in P-curve implementations when it was introduced. There was a window of time within which Curve25519 foreclosed on Internet-exploitable vulnerabilities (and probably a somewhat longer period of time where it foreclosed on some embedded vulnerabilities). That window of time has pretty much closed now, but it was real at the time.

    But he also does a handwavy thing about how the P-curves could have been backdoored. No practicing cryptography engineer I'm aware of takes these arguments seriously, and to buy them you have to take Bernstein's side over people like Neal Koblitz.

    The P-curve backdoor argument is unserious, but the P-curve implementation stuff has enough of a solid kernel to it that he can keep both arguments alive.

  • Well, DJB also focused on "nothing up my sleeve" design methodology for curves. The implication was that any curves that were not designed in such a way might have something nefarious going on.