An important detail you really want to understand before reading this is that NIST (and NSA) didn't come up with these algorithms; they refereed a competition, in which most of the analysis was done by competitors and other academics. The Kyber team was Roberto Avanzi, Joppe Bos, Léo Ducas, Eike Kiltz, Tancrède Lepoint, Vadim Lyubashevsky, John M. Schanck, Gregor Seiler, Damien Stehlé, and also Peter Schwabe, a collaborator of Bernstein's.
Absolutely, but NIST ultimately chose the winners, giving them the option to pick (non-obviously) weak/weaker algorithms. Historically only the winners are adopted. Look at the AES competition - how often do you see Serpent being mentioned, despite it having a larger security margin than Rijndael by most accounts?
> Historically only the winners are adopted. Look at the AES competition
Often, yes. But also consider the SHA-3 competition.
BLAKE2 seems more widely used than what was chosen for SHA-3 (Keccak). What was submitted to the SHA-3 competition was BLAKE1 (it didn't have a number back then, but I think this is clearer), so it's not like NIST said that Keccak is better than BLAKE2; they only said it's better than BLAKE1 (per their requirements, which are unlikely to align with your requirements because of the heavy weighting of speed in hardware). Still, this is an example of a widely used algorithm that is not standardized.
> how often do you see Serpent being mentioned, despite it having a larger security margin than Rijndael
The goal of an encryption algorithm is not only to be secure. Sure, that has to be a given: nobody is going to use a broken algorithm when given a choice. But when you have two secure options, the more efficient one is the one to choose. You could use a 32k RSA key just to be sure, or a 4k RSA key which (to the best of my knowledge) everyone considers safe until quantum. (After quantum, you need something like a 1TB key, as djb humorously proposed.)
Wikipedia article on Serpent: "The 32 rounds mean that Serpent has a higher security margin than Rijndael; however, Rijndael with 10 rounds is faster and easier to implement for small blocks."
I don't think Serpent goes unmentioned solely because it was not chosen as the winner. It may just be that Rijndael with 256-bit keys is universally considered secure and is more efficient at doing its job.
I fully admit to having a weak spot for Serpent - it is self-bitslicing (see the submission package or the Linux kernel tree), which in hindsight makes constant-time software easier to write, and it was faster in hardware even when measured at the time, which is where we have ended up putting AES anyway (e.g. AES-NI etc).
BUT. On security margins, you could argue the Serpent designers were too conservative: https://eprint.iacr.org/2019/1492
It is also true that cryptanalytic attacks appear to fare slightly better against AES than Serpent. What does this mean? A brute-force attack takes the same number of operations as the claimed security level, say, 2^128 for 128-bit. An attack is anything better than this: fewer operations. All of the attacks we know about achieve slightly less than this security level - which is nonetheless still infeasible - but at a cost: they need an infeasible amount of memory. In terms of numbers: 9000 TB to reduce 2^128 to 2^126 against full-round AES, according to a quick check of Wikipedia. For reference, the lightweight crypto competition considered 2^112 to be sufficient margin. 2^126 is still impossible.
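To put those exponents in perspective, here is a quick sanity check in Python (using the 2^126 and 2^128 figures quoted above; the 2^60 operations-per-second rate is a deliberately generous, made-up assumption):

```python
brute_force_ops = 2 ** 128  # exhaustive key search on AES-128
best_attack_ops = 2 ** 126  # best known attack, per the figure above

# The "attack" is only a 4x speedup over brute force...
print(f"speedup: {brute_force_ops / best_attack_ops:.0f}x")  # 4x

# ...and even granting an absurd 2^60 AES evaluations per second,
# 2^126 operations remain far out of reach.
seconds = best_attack_ops / 2 ** 60
years = seconds / (365 * 24 * 60 * 60)
print(f"time at 2^60 ops/sec: {years:.1e} years")  # ~2.3e12 years
```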
In practice, the difference between Serpent and AES in terms of cryptanalytic security is meaningless. It is not an example of NIST picking a weaker algorithm deliberately, or I would argue, even unintentionally. It (AES) was faster when implemented in software for the 32-bit world that seemed to be the PC market at the time.
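Blowfish has a continuing existence as the basis for bcrypt.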
Correct me if I'm wrong, but everything is also being done out in the open for everyone to see. NIST isn't using some secret analysis to make any recommendations.
Teams of cryptographers submit several proposals (and break each other's proposals). These people are well respected, largely independent, and assumed honest. Some of the mailing lists provided by NIST, where cryptographers collaborated to review each other's work, are public.
NIST may or may not consort with your friendly local neighborhood NSA people, who are bright and talented contributors in their own right. That's simply in addition to reading the same mailing lists.
At the end, NIST gets to pick a winner and explain their reasoning. What influenced the decision is surely a combination of things, some of which may be internal or private discussions.
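My rule of thumb in these situations is always: if they could, they would.
I've seen enough blatant disregard for humanity not to assume any kind of honesty in the powers that were.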
> everything is also being done out in the open for everyone to see
Well, everything apart from the secret stuff:
"I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.
That lawsuit has been gradually revealing secret NIST documents, shedding some light on what was actually going on behind the scenes, including much heavier NSA involvement than indicated by NIST's public narrative"
There is a final standardization step where NIST selects constants, and this is done without always consulting the research team. Presumably these are usually random, but the ones chosen for the Dual-EC DRBG algorithm seem to have been compromised. SHA-3 also had some suspicious constants/padding, though those haven't been shown to be vulnerable yet.
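You don't really know, but you can be reasonably sure that they didn't sabotage the submissions themselves.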
The unfortunate reality of this is that while he may be right, it is difficult to classify the responses (or non-response) from the NIST people as deceptive vs just not wanting to engage with someone coming from such an adversarial position. NIST is staffed by normal people, who probably view aggressively worded requests for clarification the same way most of us view the aggressively worded bug reports we have all fielded.
Adding accusatory hyperbolic statements like: "You exposed three years of user data to attackers by telling people to use Kyber starting when your patent license activates in 2024, rather than telling people to use NTRU starting in 2021!" doesn't help. Besides the fact that nobody is deploying standalone PQ for some time, there were several alternatives that NIST could have suggested in 2021. How about SIKE? That one was pretty nice until it was broken last year.
Unfortunately, NIST doesn't have a sterling reputation in this area, but if we're going to throw shade on the algorithm and process, a succinct breakdown of why, along with a smoking gun or two would be great. Pages and pages of email analysis, comparison to (only) one other submission, and accusations that everyone is just stalling so data can be vacuumed up because it is completely unprotected make it harder to take seriously. If Kyber-512 is actually this risky, then it deserves to be communicated clearly.
This is 100% in line with my reading of the submission.
Also noting that the page contains seventeen thousand words. That many words of Harry Potter take an average person 70 minutes to read. This text is no Harry Potter: it's chock-full of numbers, things to consider, and words and phrasings to weigh (like when quoting NIST), so you're not going to read it as fast as an average book, if you know enough about PQC to understand the text in the first place.
I even got nerdsniped near the beginning into clicking on "That lawsuit has been gradually <revealing> secret NIST documents, shedding some light on what was actually going on behind the scenes". That page (linked by the word <revealing>) is another 54,000 words. Unaware, due to not having a scroll bar on mobile (my fault, I know), I started skimming it linearly to see what those revelations might be. Nothing really materialized. At some point I caught on that I seemed to have enrolled in a PhD research project and closed that tab to continue reading the original page...
Most HN readers, who are often smart and highly technical but in various fields, cannot reasonably weigh and interpret the technobabble evidence for "nist=bad". Being in an adjacent field, I would guess that I understand more than the average reader, but I still don't feel qualified to judge this material without really giving it a thorough read. The page reasonably gives context and explains acronyms, but there's just so much of it that I can't imagine anyone who doesn't already know would want to bother with it. Not everyone understanding a submission is okay, but this one is about accusations, and that makes me feel like it is not a good submission for HN.
HN readers who don't want to read the piece in full can take solace in the fact that large-scale quantum computing has not been proven viable. Thus, which algorithms we should use to protect ourselves once what we thought was intractable becomes tractable may be a moot point. Shor's algorithm has so far factored 21 into 7 x 3. That's a long way off from factoring the enormous numbers used for modern cryptography.
Edit: Just realized the author is djb, Daniel Bernstein, which I guess is semi-ironic for me because I was recently praising him on HN for an old, well-read blog post on IPv6. Thus, I guess I may take back a bit of what I said below, or at least say that I can better understand the adversarial tone given djb's history with NIST recommendations (more info at https://en.wikipedia.org/wiki/Daniel_J._Bernstein#Cryptograp...).
> The unfortunate reality of this is that while he may be right, it is difficult to classify the responses (or non-response) from the NIST people as deceptive vs just not wanting to engage with someone coming from such an adversarial position.
Couldn't agree with this more. I don't like to harp on form over substance, but in this case the form of this blog post was so bad I had difficulty evaluating whether the substance was worthwhile. I'm not in the field of cryptography, so I'm not qualified to assess on the merits, but my thoughts reading this were:
1. All the unnecessary snark and disparagement made me extremely wary of the message. It seemed like he was making good points, but the overall tone was similar to those YouTube "WhaT ThE ElITe DoN'T WanT YoU TO KnoW!!" videos. Frankly, the author just sounds like kind of an asshole, even if he is right.
2. Did anyone actually read this whole thing?? I know people love to harp on "the Internet has killed our attention spans", and that may be true, but the flip side is that we're bombarded with so much info now that I take a very judicious approach to where I'll spend my time. On that point, if you're writing a blog post, the relevant details and "executive summary" if you will should be in the first couple paragraphs, then put the meandering, wandering diary after. Don't expect a full read if important tidbits are hidden in it like Where's Waldo.
I read the whole thing because of who the author was.
The executive summary is above the fold:
Take a deep breath and relax. When cryptographers are analyzing the security of cryptographic systems, of course they don't make stupid mistakes such as multiplying numbers that should have been added.
If such an error somehow managed to appear, of course it would immediately be caught by the robust procedures that cryptographers follow to thoroughly review security analyses.
Furthermore, in the context of standardization processes such as the NIST Post-Quantum Cryptography Standardization Project (NISTPQC), of course the review procedures are even more stringent.
The only way for the security claims for modern cryptographic standards to turn out to fail would be because of some unpredictable new discovery revolutionizing the field.
Oops, wait, maybe not. In 2022, NIST announced plans to standardize a particular cryptosystem, Kyber-512. As justification, NIST issued claims regarding the security level of Kyber-512. In 2023, NIST issued a draft standard for Kyber-512.
NIST's underlying calculation of the security level was a severe and indefensible miscalculation. NIST's primary error is exposed in this blog post, and boils down to nonsensically multiplying two costs that should have been added.
How did such a serious error slip past NIST's review process? Do we dismiss this as an isolated incident? Or do we conclude that something is fundamentally broken in the procedures that NIST is following?
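The arithmetic point in that summary is easy to sanity-check yourself: costs for two kinds of work (computation and memory access) combine additively, so multiplying them inflates the apparent security level. A toy illustration in Python, with made-up exponents rather than NIST's actual Kyber-512 figures:

```python
import math

# Hypothetical attack costs (illustrative exponents only).
bit_operations = 2 ** 140  # computation performed by the attack
memory_access = 2 ** 95    # cost attributed to accessing memory

# Doing both kinds of work costs their sum, dominated by the larger term.
added = bit_operations + memory_access
print(f"added (correct):       2^{math.log2(added):.1f}")       # ~2^140.0

# Multiplying instead claims a cost no real attacker would have to pay.
multiplied = bit_operations * memory_access
print(f"multiplied (inflated): 2^{math.log2(multiplied):.1f}")  # 2^235.0
```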
> I know people love to harp on "the Internet has killed our attention spans"
Not just that. Give your parent or grandparent a 75-page booklet to read, full of accusations and snark, and let's say it's about something they care about and actually impacts their lives (maybe a local government agency, idk). What are the odds they are going to read that A-Z versus waiting for a summary or call-to-action to be put out? The latter can be expected to happen if there is actually something worthwhile in there.
This is objectively too long for casual reading, nothing to do with anyone's attention span.
(The 75-page estimate is based on: (1) a proficient reader doing about a page per minute in most books that I know of, so pages==minutes; (2) the submission being 17.6k words; (3) average reading speed is ~250 wpm, resulting in 17.6e3/250=70 minutes; (4) this is not an easy text, it has lots of acronyms and numbers, so conservatively pad to 75.)
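> Did anyone actually read this whole thing?
Yup. I'm not a cryptographer, so I didn't understand most of the detail. I realized it was DJB after a couple of paragraphs.
> the relevant details and "executive summary" if you will should be in the first couple paragraphs
It wasn't written for "executives".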
Even worse, I expected to find a part where he reports it and includes the responses/follow-up from that... But this is the first time it's been published, as far as I understand? Did I miss it in the wall of text? Or is it really a huge initial writeup that may end up with someone responding "oh, we did mess up, didn't we? Let's think how to deal with that."
That's pretty selective quoting of the issues. He even says himself that the wait for the patent is one of the minor issues.
The questions he asks are: why did they repeatedly change the evaluation criteria after the fact, present results in misleading ways, and make basic calculation errors (remember, these guys are experts)? All of this in favor of one algorithm.
To someone like me this points to the fact that they really wanted that algorithm to be the standard. Add to that the fact that there was significantly more NSA involvement than indicated, and that they did their best to hide this, and I am extremely skeptical of the standard.
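Because someone likely stood to benefit from it. The question is who and how?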
> If Kyber-512 is actually this risky, then it deserves to be communicated clearly.
The statement djb seems to be making: It is not known if Kyber-512 is as cryptographically strong as AES-128 by the definitions provided by NIST.
This is an issue because these algorithms will be embedded within hardware soon.
> Besides the fact that nobody is deploying standalone PQ for some time
Now that an implementation has been chosen to be standardized, hardware vendors are likely to start designing blocks that can more efficiently compute the FIPS 203 standard (if they haven't already designed a few to begin with).
Given that the standard's expected publication is in 2024, and the 1-2 year timeline for NIST CMVP review of FIPS modules, I wouldn't be surprised to see a FIPS 140-3 hardware module with ML-KEM (Kyber) by mid-2026.
> a succinct breakdown of why
The issue seems to be his statement from [1]: "However, NIST didn't give any clear end-to-end statements that Kyber-512 has N bits of security margin in scenario X for clearly specified (N,X)."
djb succinctly outlines the "scenario X" he referred to in [2], in which he only needs a yes or no answer. He is literally asking the people who should know, and who have the technical background to discuss the matter. He received no response, which is why he posted [1].
NIST's reply in [3] is a dismissal of [1] without a discussion of the security itself. The frustrating part for me to read was the second paragraph: "The email you cited (https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...), speaks for itself. NIST continues to be interested in people's opinions on whether or not our current plan to standardize Kyber512 is a good one. While reviewers are free, as a fun exercise, to attempt to "disprove what NIST _appears_ to be claiming about the security margin," the results of this exercise would not be particularly useful to the standardization process. NIST's prior assertions and their interpretation are not relevant to the question of whether people believe that it is a good idea to standardize Kyber512."
If NIST views the reviewers' claims about security to be "not particularly useful to the standardization process," (and remember: the reviewers are themselves cryptographers) then why should the public trust the standard at all?
> a smoking gun or two would be great
There wouldn't be a smoking gun, because the lack of clarification is the issue at hand. If they could explain how they calculated the security strength of Kyber-512, this would be a different issue.
The current third-party estimates of Kyber-512's security strength (which is a nebulous term...) put it below the original requirements, so clarification or justification seems necessary.
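[1]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...
[2]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...
[3]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...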
> The current third-party estimates of Kyber-512's security strength (which is a nebulous term...) put it below the original requirements
More to the point, (at least to my understanding) it puts it on par with another contender that was rejected from the NIST competition for being too weak a security construct.
If TFA were by a nobody I might agree, but TFA is by DJB and/or Tanja Lange, and they're not nobodies. These things need to be at least somewhat adversarial partly because that's what it takes to do cryptanalysis, and partly because of past shenanigans. It goes with the territory and the politics. It's unavoidable.
That's more of a diary than an article -- jargony, disorganized, running in circles, very hard to follow. But the information might be important regardless. There's a strong implication that NIST with help of the NSA intentionally standardized on a weak algorithm.
We all know that's possible.
But can someone who follows some of this stuff more closely explain what the play would be? I always assumed that weakening public cryptography in such a way is a risky bet, because you can't be sure that an attacker doesn't independently find out what you know. You can keep a secret backdoor key (that was the accusation when they released Dual_EC_DRBG), but you can't really hide mathematical results.
Why the overwhelming benefit of the doubt in an organization that has repeatedly failed expectations?
I don't understand why this is even a conversation.
We don't need them any more. Export restrictions are gone.
What we need is a consortium to capture the attention of the hardware vendors and limit NIST and the NSA to participant status.
Then if the government decides to adopt their backdoored standards, they're the only ones.
Because the NSA has equally well-funded adversaries that would love to find a back door in the NIST standards the whole of the US government uses. Even if the highest levels of the military and government use secret-squirrel super cryptography, the rest are using NIST standards. It's all the boring parts of government that deposit paychecks and run the badge readers to their offices.
> You're making an assumption that the NSA cares about the efficacy of cryptography for other people. Why would they care about that?
Hypothesis 1: because the NSA sees evidence that more efficient cryptographic algorithms are easier for them to crack.
To give some weak evidence for this: if you need brute force to crack the cipher (or hash function), a more efficient algorithm needs less computational power to crack.
Hypothesis 2: a more efficient algorithm is likely to be applied in more areas than a less efficient one (think of smartcards or microcontrollers). So if the NSA finds a weakness or is capable of introducing a backdoor in it, it can decrypt a lot more data from more areas.
Certain types of attacks basically make it so you need to have a specific private key to act as a backdoor. That's the current guess on what may be happening with the NIST ECC curves.
If so, this can be effectively a US-only backdoor for a long, long time.
I don't believe that is anybody's guess on what may be happening with the NIST ECC curves. Ordinarily, when people on HN say things like this, they're confusing Dual EC, a public key random number generator, known to be backdoored, with the NIST curve standards.
NSA weakened DES from 64-bit keys to 56-bit keys. The idea was that they could be ahead in breaking it, and that by the time 56-bit keys were too weak in general then something else would replace DES. Risky? Yes, but it worked out, for some value of "worked out". So I wouldn't assume something like that wouldn't happen again.
They did that openly. What they did in secret was to harden it against an incredibly powerful attack (it's still a basis for block and hash cryptanalysis today) that nobody else knew about.
The general idea would be that they get a few years out of it before other nation/state factions discover it. The theory behind it is called “kleptography”, because the NSA is deluded enough to think that you can steal information “securely”.
It's all far too conspiratorial for me. Just show me the math as to why it's broken, I don't need a conspiratorial mind map drawing speculative lines between various topics. Do an appendix or two for that.
There's nothing conspiratorial about the post, why not read the article? The math error is described in line 2, the actual error about two screens down, highlighted in red.
> Discovering the secret workings of NISTPQC. I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.
As much as I generally loathe djb personally, professionally he will always have my support as he’s been consistently willing to take the federal government to task in court. It brings me great joy to see he’s still at it.
Might be because he’s a bit of a “Linus” in crypto with the same ego and temper.
However, the man has done so much to advance privacy and cryptography that I think he's earned the right to be a bit snippy, especially when he's discussing something so complex that 99% of the comments are "too long to read" and "I read it but I still don't understand it".
Notwithstanding DJB's importance to cryptography, and the fact that I'm ignorant of a large number of details here, there was a point where he lost a lot of credibility with me.
Specifically, when he gets to the graphs, he says "NIST chose to deemphasize the bandwidth graph by using thinner red bars for it." That is just not proven by his evidence, and there is a very plausible explanation for it. The graph that has the thinner bars is a bar chart that has more data points than the other graph. Open up your favorite charting application, and observe the difference in a graph that has 12 data points versus one with 9... of course the one with 12 data points has thinner lines! At this point, it feels quite strongly to me that he is trying to interpret every action in the most malicious way possible.
In the next bullet point, he complains that they're not using a log scale for the graph... where everything is in the same order of magnitude. That doesn't sound like a good use case for log scale, and I'm having a hard time trying to figure out why it might be justified in this case.
Knowing that DJB was involved in NTRU, it's a little hard to shake the feeling that a lot of this is DJB just being salty about losing the competition.
>At this point, it feels quite strongly to me that he is trying to interpret every action in the most malicious way possible.
Given the long and detailed history of various governments and government agencies purposefully attempting to limit the public from accessing strong cryptography, I tend to agree with the "assume malice by default" approach here. Assuming anything else, to me at least, seems pretty naive.
Eh, it goes both ways. Back in the 1970s and 1980s there was a whole lot of suspicion about changes that the NSA made to the DES S-boxes with limited explanation - was it a backdoor in some way? Then in 1989 white hats "discovered" differential cryptanalysis, and realized that the changes made to the algorithm actually protected it from a then-unknown (to the general public) cryptographic attack. Differential cryptanalysis worked beautifully on some other popular cryptosystems of the era; e.g. the FEAL-4 cipher could be broken with just 8 plaintext examples, while DES offered protection up to 2^47 chosen plaintexts.
The actual way that the NSA had tried to limit DES was to cap its key length at 48 bits, figuring that their advantage in computing power would let them brute-force it when no one else could. (NBS, NIST's predecessor, compromised between the NSA's desire for 48 and the rest of the world's desire for 64, which is why DES had the always-bizarre 56-bit key.) So sometimes they strengthen it, sometimes they weaken it, and so I'm not sure it's appropriate to presume malice.
There's a meaningful difference between assuming an actor is malicious or untrustworthy and going out of your way to provide the maximally malicious interpretation of each of their actions. As a matter of rhetoric, the latter tends to give the impression of a personal vendetta.
If you continue reading, you'll find that they aren't responding to requests for clarification on their hand-waving computations. Suspicion is definitely warranted.
> Knowing that DJB was involved in NTRU, it's a little hard to shake the feeling that a lot of this is DJB just being salty about losing the competition.
There aren't a lot of people in the world with the technical know-how for cryptography. It's clear that competitors in this space are going to be reviewing each other's work.
I'm not sure N(IST)SA has any credibility left. The popularity of Curve25519 over their P curves is encouraging, and it would be great to see the community continue in this direction and largely ignore them going forward.
The government shouldn't be leading or deciding; it would be better for it to gather current consensus and follow it when it comes to FIPS, regulation, etc.
The NIST standardization process appears to have a grey area, particularly around the selection of constants.
The skepticism around standardization, advocating instead for direct adoption from cryptographers, sheds light on potential shortcomings in the current system.
There is definitely a need for more transparent and open scrutiny in algorithm standardization to ensure security objectives are met.
Related note: government employees (including military and intel) are just people, and worse, bureaucrats. They aren't magical wizards who can all do amazing things with mathematics and witchcraft. If they were good at what they do, they wouldn't need ever-increasing funding and projects to fix things.
Cryptanalysis and encryption are somewhat of an exception to this. There are some extremely smart people who work in these areas for the government, precisely because funding and application is on a different scale.
My takeaway (impression) from the DJB post is that the evaluation by NISTPQC does not seem to provide algorithms with a firm level of security; the evaluation is not clear-cut, and it does not provide a good, conservative lower bound for the security of the algorithms selected.
If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.
However, this man is one of the foremost cryptographers in the world, he has basically single-handedly killed US government crypto export restrictions back in the day, and (not least of all because of Snowden) we know that the NSA really is trying to sabotage cryptography.
Also, he basically founded the field of post-quantum cryptography.
Is NIST trying to derail his work by standardizing crappy algorithms with the help of the NSA? Who knows. But to me it does smell like that.
Bernstein has a history of being right, and NIST and the NSA have a history of sabotaging cryptographic standards (google Dual_EC_DRBG if you don't know the story).
This comment is factually incorrect on a number of levels.
1) single-handedly killed US government crypto export restrictions - Bernstein certainly litigated, but was not the sole actor in this fight. For example, Phil Zimmerman, the author of PGP, published the source code of PGP as a book to work around US export laws, which undoubtedly helped highlight the futility of labelling open source software as a munition: https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...
2) Bernstein "founded" the field of post quantum cryptography: Uh. Ok. That's not how academia works. Bernstein was certainly an organiser of the first international workshop on post quantum cryptography, but that's not the same as inventing a field. Many of the primitives that are now candidates were being published long before this, McEliece being one of the oldest, but even Ajtai's lattice reductions go back to '97.
3) The dual_ec rng was backdoored (previously read was and is fishy, poor wording on my part), but nobody at the time wanted NIST to standardize it because it was a _poor PRNG anyway_: slow and unnecessarily complicated. Here is a patent from Scott Vanstone on using DUAL_EC for "key escrow" which is another way of saying "backdoor": https://patentimages.storage.googleapis.com/32/9b/73/fe5401e... - filed in 2006. In case you don't know Scott Vanstone, he's the founder of Certicom. So at least one person noticed. This was mentioned in a blog post as a result of the Snowden leaks working out how the backdoor happened: https://blog.0xbadc0de.be/archives/155
NSA have been caught in a poor attempt to sabotage a standard that nobody with half a brain would use. On the other hand NSA also designed SHA-2, which you are likely using right now, and I'm not aware of anyone with major concerns about it. When I say NSA designed it, I don't mean "input for a crypto competition" - a team from the NSA literally designed it and NIST standardized it, which is not the case for SHA-3, AES or the current PQC process.
DJB is a good cryptographer, better than me for sure. But he's not the only one - and some very smart, non-NSA, non-US-citizen cryptographers were involved in the design of Kyber, Dilithium, Falcon etc.
I had the same take on Dual EC prior to Snowden. The big revelation with Snowden wasn't NSA involvement in Dual EC, but rather that (1) NSA had intervened to get Dual EC defaulted-on in RSA's BSAFE library, which was in the late 1990s the commercial standard for public key crypto, and (2) that major vendors of networking equipment were --- in defiance of all reason --- using BSAFE rather than vetted open-source cryptography libraries.
DJB probably did invent the term "post-quantum cryptography". For whatever that's worth.
Bernstein is often right, despite the controversy around the Gimli permutation.
In this particular case it's worth noting that neither BSI (Germany) nor NLNCSA (The Netherlands) recommend Kyber.
Unfortunately, alternative algorithms are more difficult to work with due to their large key sizes among other factors, but it's a price worth paying. At Backbone we've opted not to go down the easy route.
> If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.
> However, this man is one of the foremost cryptographers in the world […]
It's possible to be both (not saying Bernstein is).
Plenty of smart folks have 'jumped the shark' intellectually: Ted Kaczynski, the Unabomber, was very talented in mathematics before he went off the deep end.
> Plenty of smart folks have 'jumped the shark' intellectually: Ted Kaczynski, the Unabomber, was very talented in mathematics before he went off the deep end.
Kaczynski dropped out of society to live in a cabin alone at 29. He delivered his first bomb at 35. I'm not sure this is a reasonable comparison to invoke in any way whatsoever.
When DJB starts posting about the downfall of modern society from his remote cabin in Montana, perhaps, but as far as I know he's still an active professor working from within the University system.
An interesting set of comments (by tptacek) from a thread in 2022 (I wonder if they still hold the same opinion in light of this latest post on NIST-PQC by djb):
> The point isn't that NIST is trustworthy. The point is that the PQC finalist teams are comprised of academic cryptographers from around the world with unimpeachable reputations, and it's ludicrous to suggest that NSA could have compromised them. The whole point of the competition structure is that you don't simply have to trust NIST; the competitors (and cryptographers who aren't even entrants in the contest) are peer reviewing each other, and NIST is refereeing.
> What Bernstein is counting on here is that his cheering section doesn't know the names of any cryptographers besides "djb", Bruce Schneier, and maybe, just maybe, Joan Daemen. If they knew anything about who the PQC team members were, they'd shoot milk out their nose at the suggestion that NSA had suborned backdoors from them. What's upsetting is that he knows this, and he knows you don't know this, and he's exploiting that.
---
> I spent almost 2 decades as a Daniel Bernstein ultra-fan --- he's a hometown hero, and also someone whose work was extremely important to me professionally in the 1990s, and, to me at least, he has always been kind and cheerful... I know what it's like to be in the situation of (a) deeply admiring Bernstein and (b) only really paying attention to one cryptographer in the world (Bernstein).
> But talk to a bunch of other cryptographers --- and, also, learn about the work a lot of other cryptographers are doing --- and you're going to hear stories. I'm not going to say Bernstein has a bad reputation; for one thing, I'm not qualified to say that, and for another I don't think "bad" is the right word. So I'll put it this way: Bernstein has a fucked up reputation in his field. I am not at all happy to say that, but it's true.
---
> What's annoying is that [Bernstein is] usually right, and sometimes even right in important new ways. But he runs the ball way past the end zone. Almost everybody in the field agrees with the core things he's saying, but almost nobody wants to get on board with his wild-eyed theories of how the suboptimal status quo is actually a product of the Lizard People.
I don't think the "these finalist teams are trustworthy" argument is completely watertight. If the US wanted to make the world completely trust and embrace subtly broken cryptography, a pretty solid way to do that would be to hold a competition where a whole bunch of great, independent teams of cryptography researchers can submit their algorithms, then have a team of excellent NSA cryptographers analyze them and pick an algorithm with a subtle flaw that others haven't discovered. Alternatively, NIST or the NSA would just need to plant one person on one of the teams, and I'm sure they could figure out some clever way to subtly break their team's algorithm in a way that's really hard to notice. With the first option, no participant in the competition has to know that there's any foul play. In the second, only a single participant has to know.
Of course I'm not saying that either of those things happened, nor that they would be easy to accomplish. Hell, maybe they're literally impossible and I just don't understand enough cryptography to know why. Maybe the NIST truly has our best interest at heart this time. I'm just saying that, to me, it doesn't seem impossible for the NIST to ensure that the winner of their cryptography contests is an algorithm that's subtly broken. And given that there's even a slight possibility, maybe distrusting the NIST recommendations isn't a bad idea. They do after all have a history of trying to make the world adopt subtly broken cryptography.
I hope he finds all sorts of crazy documents from his FOIA thing. FOIA lawsuits are a very normal part of the process (I've had the same lawyers pry loose stuff from my local municipality). I would bet real money against the prospect of him finding anything that shakes the confidence of practicing cryptography engineers in these standards. Many of the CRYSTALS team members are quite well regarded.
My reading is that he's a combative academic railing against a standards body that refuses to say how it works and that has a deserved reputation for dishonesty and shenanigans.
Love the narrative style of this writing, second-guessing the erroneous thought processes. Are they deceptive? Who knows.
What worries me is that it's neither malice nor incompetence, but that a new, darker force has entered our world, even at those tables with the highest stakes... dispassion and indifference.
It's hard to get good people these days. A lot of people stopped caring, even amongst the young and eager. Whether it's climate change, the world economic situation, declining education, post-pandemic brain fog, defeat in the face of AI, chemicals in the water... everywhere I sense a shrug of slacking off, lying low, soft quitting, and generally fewer fucks given all round.
Maybe that's just my own fatigue, but in security we have to be vigilant all the time, and there's only so much energy humans can bring to that. That's why I worry that we will lose against AI. Not because it's smarter, but because it doesn't have to _care_, whereas we do.
This apathy is an interesting phenomenon, let's not ignore it. The Internet has brought us a wealth of knowledge but it has also shown us how truly chaotic the world really is. And negativity is a profitable way to drive engagement, so damn near everyone can see how problematic our society is. And when the algorithm finds something you care to be sad about, it will show you more, more, and ever more all the way into depression.
This is the lasting legacy of the Internet, now. Not freedom for all to seek and learn, but freedom for the negativity engines to seek out your brain and suck you into personal obliteration.
A society of good people? Nobody really cares any more. And I do agree with the GP; if you look, you can see it everywhere. What is this going to become? Collective helplessness as we eke out what little bits of personal fulfillment we can get in between endless tragedy and tantalizing promise?
Unfortunately, the NSA and NIST are most likely recommending a quantum-proof scheme that they've already developed cryptanalysis against, either through high-qubit proprietary technology or specialized de-latticing algorithms.
The NSA is very good at math, so I'd be thoroughly surprised if this analysis was error by mistake rather than error through intent.
The NSA also has a mission-based interest in _breaking_ other people's crypto though, which is generally known.
Which is generally known, so I'm surprised by your argument. Even if the NSA knows more than they are telling us, this doesn't result in most of us feeling less worried, as their ends may not be strengthening the public's cryptography!
Also, we still to this day do not know where the seeds for P-256 and P-384 came from. And we're using those curves everywhere. There is a non-zero chance that the NSA basically has a backdoor for all NIST ECC curves, and no one actually seems to care.
I just find it sad that it's things like these that make it impossible for the layman to figure out what is going on with, for example, Mochizuki's new stuff
I have no reason to doubt that a lot of math has been made more difficult than necessary just because it is known to give a subtle military advantage in some cases, but this isn't new.
I'm stuck on trying to work out what it would mean to de-lattice something. Would that transform a lattice basis into a standard vector-space basis in R^n or something, or, like MOV, would it send the whole lattice to an element of some prime extension field?
In my mind's eye, it's cooler: it's like, you render the ciphertext as a raster image, and then "de-lattice" it to reveal the underlying plaintext, scanline by scanline.
Somebody would leak or steal that as it would be a GIGANTIC leap forward in our engineering skill at the quantum level.
Getting more than a handful of qubits to stay coherent and not collapse into noise is a huge research problem right now, and progress has been practically non-existent for almost a decade.
Assuming djb is correct and the current process is broken... is trying to expose it and then fix it through FOIA requests really the best approach?
If your codebase is hairy enough, and the problem to be solved is fundamentally fairly simple, sometimes it's better to rewrite than refactor. Doubly so if you believe a clever adversary has attempted to insert a subtle backdoor or bugdoor.
What would a better crypto selection process look like? I like the idea of incorporating "skin in the game" somehow... for example, the cryptographer who designs the scheme could wager some cash that it won't be broken within a particular timeframe. Perhaps a philanthropist could offer a large cash prize to anyone who's able to break the winning algorithm. Etc.
Taking money from the cryptographers creates the exact opposite of the incentive you want: your NSA black-budget slush fund has orders of magnitude more spending power than anybody honest could hope to acquire.
An important detail you really want to understand before reading this is that NIST (and NSA) didn't come up with these algorithms; they refereed a competition, in which most of the analysis was done by competitors and other academics. The Kyber team was Roberto Avanzi, Joppe Bos, Léo Ducas, Eike Kiltz, Tancrède Lepoint, Vadim Lyubashevsky, John M. Schanck, Gregor Seiler, Damien Stehlé, and also Peter Schwabe, a collaborator of Bernstein's.
Absolutely, but NIST ultimately choose the winners, giving them the option to pick (non-obviously) weak/weaker algorithms. Historically only the winners are adopted. Look at the AES competition - how often do you see Serpent being mentioned, despite it having a larger security margin than Rijndael by most accounts?
> Historically only the winners are adopted. Look at the AES competition
Often, yes. But also consider the SHA-3 competition.
BLAKE2 seems more widely used than what was chosen for SHA-3 (Keccak). What was submitted for the SHA-3 competition was BLAKE1 (it didn't have a number back then but I think this is clearer) so it's not like NIST said that Keccak is better than BLAKE2, they only said it's better than BLAKE1 (per their requirements, which are unlikely to align with your requirements because of the heavy weighing of speed-in-hardware), but still this is an example of a widely used algorithm that is not standardized.
> how often do you see Serpent being mentioned, despite it having a larger security margin than Rijndael
The goal of an encryption algorithm is not only to be secure. Sure, that has to be a given: nobody is going to use a broken algorithm when given a choice. But when you have two secure options, the more efficient one is the one to choose. You could use a 32k RSA key just to be sure, or a 4k RSA key which (to the best of my knowledge) everyone considers safe until quantum. (After quantum, you need something like a 1TB key, as djb humorously proposed.)
Wikipedia article on Serpent: "The 32 rounds mean that Serpent has a higher security margin than Rijndael; however, Rijndael with 10 rounds is faster and easier to implement for small blocks."
I don't know that nobody talks about Serpent solely because it was not chosen as winner. It may just be that Rijndael with 256-bit keys is universally considered secure and is more efficient at doing its job.
10 replies →
I fully admit to having a weak spot for Serpent - it is self-bitslicing (see the submission package or the linux kernel tree), which in hindsight makes constant time software easier to write, and it was faster in hardware even when measured at the time, which is where we have ended up putting AES anyway (e.g. AES-NI etc).
BUT. On security margins, you could argue the Serpent designers were too conservative: https://eprint.iacr.org/2019/1492 It is also true that cryptanalytic attacks appear to fare slightly better against AES than Serpent. What does this mean? A brute force attack has the same number of operations as the claimed security level, say, 2^128 for 128-bit. An attack is something better than this: fewer operations. All of the attacks we know about achieve slightly less than this security level - which is nonetheless still impossible to do - but that comes at a cost: they need an infeasible amount of memory. In terms of numbers: 9000 TB to reduce 2^128 to 2^126 against full-round AES according to a quick check of wikipedia. For reference, the lightweight crypto competition considered 2^112 to be sufficient margin. 2^126 is still impossible.
In practice, the difference between Serpent and AES in terms of cryptanalytic security is meaningless. It is not an example of NIST picking a weaker algorithm deliberately, or I would argue, even unintentionally. It (AES) was faster when implemented in software for the 32-bit world that seemed to be the PC market at the time.
2 replies →
Blowfish has a continuing existence as the basis for bcrypt.
4 replies →
Correct me if I'm wrong, everything is also being done out in the open for everyone to see. The NIST aren't using some secret analysis to make any recommendations.
Teams of cryptographers submit several proposals (and break each other's proposals). These people are well respected, largely independent, and assumed honest. Some of the mailing lists provided by NIST where cryptographers collaborated to review each other's work are public
NIST may or may not consort with your friendly local neighborhood NSA people, who are bright and talented contributors in their own right. That's simply in addition to reading the same mailing lists
At the end, NIST gets to pick a winner and explain their reasonning. What influenced the decision is surely a combination of things, some of which may be internal or private discussions
12 replies →
My rule of thumb in these situations is always: if they could, they would.
I've seen enough blatant disregard for humanity to assume any kind of honesty in the powers that were.
2 replies →
> everything is also being done out in the open for everyone to see
Well, everything apart from the secret stuff:
"I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.
That lawsuit has been gradually revealing secret NIST documents, shedding some light on what was actually going on behind the scenes, including much heavier NSA involvement than indicated by NIST's public narrative"
2 replies →
There is a final standardization step where NIST selects constants, and this is done without always consulting with the research team. Presumably, these are usually random, but the ones chosen for the Dual-EC DRBG algorithm seem to have been compromised. SHA-3 also had some suspicious constants/padding, but that wasn't shown to be vulnerable yet.
9 replies →
You don't really know, but you can be reasonably sure that they didn't sabotage the submissions themselves.
The unfortunate reality of this is that while he may be right, it is difficult to classify the responses (or non-response) from the NIST people as deceptive vs just not wanting to engage with someone coming from such an adversarial position. NIST is staffed by normal people who probably view aggressively worded requests for clarification in the same way that most of us have probably fielded aggressively worded bug reports.
Adding accusatory hyperbolic statements like: "You exposed three years of user data to attackers by telling people to use Kyber starting when your patent license activates in 2024, rather than telling people to use NTRU starting in 2021!" doesn't help. Besides the fact that nobody is deploying standalone PQ for some time, there were several alternatives that NIST could have suggested in 2021. How about SIKE? That one was pretty nice until it was broken last year.
Unfortunately, NIST doesn't have a sterling reputation in this area, but if we're going to cast shade on the algorithm and process, a succinct breakdown of why, along with a smoking gun or two would be great. Pages and pages of email analysis, comparison to (only) one other submission, and accusations that everyone is just stalling so data can be vacuumed up because it is completely unprotected makes it harder to take seriously. If Kyber-512 is actually this risky, then it deserves to be communicated clearly.
This is 100% in line my reading of the submission.
Also noting that the page contains seventeen thousand words. That many words of harry potter take an average person 70 minutes to read. This text is no harry potter: it's chock-full of numbers, things to consider, and words and phrasings to weigh (like when quoting NIST), so you're not going to read it as fast as an average book, if you know enough about PQC to understand the text in the first place.
I even got nerdsniped near the beginning into clicking on "That lawsuit has been gradually <revealing> secret NIST documents, shedding some light on what was actually going on behind the scenes". That page (linked by the word <revealing>) is another 54000 words. Unaware, due to not having a scroll bar on mobile (my fault, I know), I started skimming it linearly to see what those revelations might be. Nothing really materialized. At some point I caught on that I seemed to have enrolled for a PhD research project and closed that tab to continue reading the original page...
Most HN readers, who are often smart and highly technical but in various fields, cannot reasonably weigh and interpret the techobabble evidence for "nist=bad". Being in an adjacent field, I would guess that I understand more than the average reader, but still don't feel qualified to judge this material without really giving it a thorough read. The page reasonably gives context and explains acronyms, but there's just so much of it that I can't imagine anyone who doesn't already know would want to bother with it. Not everyone understanding a submission is okay, but this is about accusations, and that makes me feel like it is not a good submission for HN.
HN readers that don't want to read the piece in full can take solace in that PQC has not been proven viable. Thus, what algorithms we should use to protect ourselves once what we thought was intractable becomes tractable may be a moot point. Shor's algorithm is capable of factoring 21 into 7 x 3. That's a long way off from factoring the thousands of digits-long numbers used for modern cryptography.
2 replies →
Edit: Just realized the author is djb, Daniel Bernstein, which I guess is semi-ironic for me because I was recently praising him on HN for an old, well-read blog post on ipv6. Thus, I guess I may take back a bit of what I said below, or least perhaps it would be better to say that I can better understand the adversarial tone given djb's history with NIST recommendations (more info at https://en.wikipedia.org/wiki/Daniel_J._Bernstein#Cryptograp...).
> The unfortunate reality of this is that while he may be right, it is difficult to classify the responses (or non-response) from the NIST people as deceptive vs just not wanting to engage with someone coming from such an adversarial position.
Couldn't agree with this more. I don't like to harp on form over substance, but in this case the form of this blog post was so bad I had difficulty evaluating whether the substance was worthwhile. I'm not in the field of cryptography, so I'm not qualified to assess on the merits, but my thoughts reading this were:
1. All the unnecessary snark and disparagement made me extremely wary of the message. It seemed like he was making good points, but the overall tone was similar to those YouTube "WhaT ThE ElITe DoN'T WanT YoU TO KnoW!!" videos. Frankly, the author just sounds like kind of an asshole, even if he is right.
2. Did anyone actually read this whole thing?? I know people love to harp on "the Internet has killed our attention spans", and that may be true, but the flip side is we're bombarded with so much info now that I take a very judicious approach to where I'll spend my time. On that point, if you're writing a blog post, the relevant details and "executive summary" if you will should be in the first couple paragraphs, then put the meandering, wandering diary after. Don't expect a full read if important tidbits are hidden like Where's Waldo in your meandering diary.
I read the whole thing because of who the author was.
The executive summary is above the fold:
Take a deep breath and relax. When cryptographers are analyzing the security of cryptographic systems, of course they don't make stupid mistakes such as multiplying numbers that should have been added.
If such an error somehow managed to appear, of course it would immediately be caught by the robust procedures that cryptographers follow to thoroughly review security analyses.
Furthermore, in the context of standardization processes such as the NIST Post-Quantum Cryptography Standardization Project (NISTPQC), of course the review procedures are even more stringent.
The only way for the security claims for modern cryptographic standards to turn out to fail would be because of some unpredictable new discovery revolutionizing the field.
Oops, wait, maybe not. In 2022, NIST announced plans to standardize a particular cryptosystem, Kyber-512. As justification, NIST issued claims regarding the security level of Kyber-512. In 2023, NIST issued a draft standard for Kyber-512.
NIST's underlying calculation of the security level was a severe and indefensible miscalculation. NIST's primary error is exposed in this blog post, and boils down to nonsensically multiplying two costs that should have been added.
How did such a serious error slip past NIST's review process? Do we dismiss this as an isolated incident? Or do we conclude that something is fundamentally broken in the procedures that NIST is following?
> I know people love to harp on "the Internet has killed our attention spans"
Not just that. Give your parent or grandparent a 75-page booklet to read, full of accusations and snark, and let's say it's about something they care about and actually impacts their lives (maybe a local government agency, idk). What are the odds they are going to read that A-Z versus waiting for a summary or call-to-action to be put out? The latter can be expected to happen if there is actually something worthwhile in there.
This is objectively too long for casual reading, nothing to do with anyone's attention span.
(The 75-page estimate is based on: (1) a proficient reader doing about a page per minute in most books that I know of, so pages==minutes; (2) the submission being 17.6k words; (3) average reading speed is ~250 wpm, resulting in 17.6e3/250=70 minutes; (4) this is not an easy text, it has lots of acronyms and numbers, so conservatively pad to 75.)
2 replies →
> Did anyone actually read this whole thing?
Yup. I'm not a cryptographer, so I didn't understand most of the detail. I realized it ws DJB after a couple of paragraphs.
> the relevant details and "executive summary" if you will should be in the first couple paragraphs
It wasn't written for "executives".
1 reply →
Even worse, I expected to find a part when he reports it and includes the responses/follow-up from that... But this is the first time it's published a far as I understand? Did I miss it in the wall of text? Or is it really a huge initial writeup that may end up with someone responding "oh, we did mess up, didn't we? Let's think how to deal with that."
4 replies →
That's pretty selective quoting of the issues. He even says himself that the waiting for the patent is one of the minor issues.
The many questions he asks is why did they repeatedly change the evaluation criteria after the fact, presented results in a misleading ways, and made basic calculation errors (remember these guys are experts). All these in favor of one algorithm.
Now to someone like me this points to the fact that they really wanted that algorithm to be the standard. If we add to that the fact that there was significantly more NSA involvement than indicated and that they did their best to hide this, leads me to be extremely skeptical of the standard.
Because someone likely stood to benefit from it. The question is who and how?
> If Kyber-512 is actually this risky, then it deserves to be communicated clearly.
The statement djb seems to be making: It is not known if Kyber-512 is as cryptographically strong as AES-128 by the definitions provided by NIST.
This is an issue because these algorithms will be embedded within hardware soon.
> Besides the fact that nobody is deploying standalone PQ for some time
Now that an implementation has been chosen to be standardized, hardware vendors are likely to start designing blocks that can more efficiently compute the FIPS 203 standard (if they haven't already designed a few to begin with).
Given that the standard's expected publication is in 2024, and the 1-2 year review timeline for NIST CMVP review on FIPS modules, I wouldn't be surprised to see a FIPS 140-3 Hardware Module with ML-KEM (Kyber-etc.) by mid 2026.
> a succinct breakdown of why
The issue seems to be his statement from [1]: "However, NIST didn't give any clear end-to-end statements that Kyber-512 has N bits of security margin in scenario X for clearly specified (N,X)."
djb succinctly outlines the "scenario X" he referred to in [2], where he needs only a yes or no answer. He is literally asking the people who should know, the ones with the technical background to discuss the matter. He received no response, which is why he posted [1].
NIST's reply in [3] is a dismissal of [1] without a discussion of the security itself. The frustrating part for me to read was the second paragraph: "The email you cited (https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...), speaks for itself. NIST continues to be interested in people's opinions on whether or not our current plan to standardize Kyber512 is a good one. While reviewers are free, as a fun exercise, to attempt to "disprove what NIST _appears_ to be claiming about the security margin," the results of this exercise would not be particularly useful to the standardization process. NIST's prior assertions and their interpretation are not relevant to the question of whether people believe that it is a good idea to standardize Kyber512."
If NIST views the reviewers' claims about security to be "not particularly useful to the standardization process," (and remember: the reviewers are themselves cryptographers) then why should the public trust the standard at all?
> a smoking gun or two would be great
There wouldn't be a smoking gun because the lack of clarification is the issue at hand. If they could explain how they calculated the security strength of Kyber-512, then this would be a different issue.
The current 3rd party estimates of Kyber-512's security strength (which is a nebulous term...) puts it below the original requirements, so clarification or justification seems necessary.
[1]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...
[2]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...
[3]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...
> The current 3rd party estimates of Kyber-512's security strength (which is a nebulous term...) puts it below the original requirements
More to the point, (at least to my understanding) it puts it on par with another contender that was rejected from the NIST competition for being too weak a security construct.
If TFA were by a nobody I might agree, but TFA is by DJB and/or Tanja Lange, and they're not nobodies. These things need to be at least somewhat adversarial partly because that's what it takes to do cryptanalysis, and partly because of past shenanigans. It goes with the territory and the politics. It's unavoidable.
One can be combative and adversarial and still write succinctly and persuasively.
This text does DJB no favors. He comes across like a conspiracy theorist, based on the form of the content alone.
That's more of a diary than an article -- jargony, disorganized, running in circles, very hard to follow. But the information might be important regardless. There's a strong implication that NIST with help of the NSA intentionally standardized on a weak algorithm.
We all know that's possible.
But can someone who follows some of this stuff more closely explain what the play would be? I always assumed that weakening public cryptography in such a way is a risky bet, because you can't be sure that an attacker doesn't independently find out what you know. You can keep a secret backdoor key (that was the accusation when they released Dual_EC_DRBG), but you can't really hide mathematical results.
Why would they be willing to risk that here?
Why the overwhelming benefit of the doubt for an organization that has repeatedly failed expectations? I don't understand why this is even a conversation. We don't need them any more. Export restrictions are gone. What we need is a consortium that captures the attention of the hardware vendors and limits NIST and the NSA to participant status. Then if the government decides to adopt their backdoored standards, they'll be the only ones using them.
You're making an assumption that the NSA cares about the efficacy of cryptography for other people. Why would they care about that?
Because the NSA has equally well-funded adversaries that would love to find a back door to the NIST standards the whole of the US government uses. Even if the highest levels of the military and government use secret-squirrel super cryptography, the rest is using NIST standards. It's all the boring parts of government that deposit paychecks and run the badge readers at their offices.
> You're making an assumption that the NSA cares about the efficacy of cryptography for other people. Why would they care about that?
Hypothesis 1: because the NSA sees evidence that more efficient cryptographic algorithms are easier for them to crack.
To give some weak evidence for this: if you need brute force to crack a cipher (or hash function), a more efficient algorithm needs less computation power to crack, since each brute-force trial means running the primitive itself (a toy sketch below spells this out).
Hypothesis 2: A more efficient algorithm is likely to be deployed in more areas than a less efficient one (think of smartcards or microcontrollers). So if the NSA finds a weakness or manages to introduce a backdoor into it, it can decrypt a lot more data from more areas.
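To spell out hypothesis 1 with toy numbers (everything here is made up; only the proportionality matters):

    import math

    # Brute force means running the primitive once per key trial, so total
    # work is (number of keys) x (cost of one evaluation):
    key_bits = 128
    trials = 2 ** key_bits

    cost_fast = trials * 1   # hypothetical cipher: 1 unit per evaluation
    cost_slow = trials * 4   # hypothetical 4x-slower cipher

    # The slower cipher buys log2(4) = 2 extra "bits" of brute-force
    # resistance; efficiency and attack cost are directly coupled:
    print(math.log2(cost_slow) - math.log2(cost_fast))   # 2.0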
it's in the national security interest of the United States to have its industries use high-quality crypto
see: the Colonial Pipeline hack
2 replies →
> Why would they be willing to risk that here?
Certain types of attacks basically make it so you need to have a specific private key to act as a backdoor. That's the current guess on what may be happening with the NIST ECC curves.
If so, this can be effectively a US-only backdoor for a long, long time.
I don't believe that is anybody's guess on what may be happening with the NIST ECC curves. Ordinarily, when people on HN say things like this, they're confusing Dual EC, a public key random number generator, known to be backdoored, with the NIST curve standards.
12 replies →
No, it’s really not. Ask Neal Koblitz.
NSA weakened DES from 64-bit keys to 56-bit keys. The idea was that they could be ahead in breaking it, and that by the time 56-bit keys were too weak in general then something else would replace DES. Risky? Yes, but it worked out, for some value of "worked out". So I wouldn't assume something like that wouldn't happen again.
They did that openly. What they did in secret was to harden it against an incredibly powerful attack that nobody else knew about (one that is still a basis for block-cipher and hash cryptanalysis today).
The general idea would be that they get a few years out of it before other nation-state actors discover it. The theory behind it is called "kleptography", because the NSA is deluded enough to think that you can steal information "securely".
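The trapdoor structure behind kleptography is simple enough to show in a toy. The sketch below mimics Dual_EC_DRBG's shape in a multiplicative group instead of an elliptic curve, with tiny made-up parameters and no output truncation; none of this is the real construction, it only demonstrates how publishing P = Q^d lets the holder of d predict the stream:

    # Toy Dual_EC_DRBG-shaped generator. Assumptions: tiny made-up
    # parameters; the real design uses elliptic-curve points and
    # truncated outputs, both deliberately omitted here.
    p = 1019               # small safe prime (2*509 + 1), toy only
    Q = 2                  # generator of the multiplicative group mod p
    d = 347                # the designer's secret trapdoor
    P = pow(Q, d, p)       # published next to Q as if independently chosen

    def drbg(s, n):
        """Emit r_i = Q^s_i mod p; advance the state via s_{i+1} = P^s_i."""
        out = []
        for _ in range(n):
            out.append(pow(Q, s, p))
            s = pow(P, s, p)
        return out

    outputs = drbg(s=123, n=4)

    # Whoever knows d can turn ONE raw output into the NEXT state:
    #   r^d = (Q^s)^d = (Q^d)^s = P^s = s_next  (mod p)
    s_next = pow(outputs[0], d, p)
    assert drbg(s_next, 3) == outputs[1:]   # entire future stream predicted

Without d, predicting the stream requires a discrete log; with d, one raw output suffices. The accusation against Dual EC is that its standardized points hide exactly such a relationship.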
It's all far too conspiratorial for me. Just show me the math as to why it's broken, I don't need a conspiratorial mind map drawing speculative lines between various topics. Do an appendix or two for that.
There's nothing conspiratorial about the post, why not read the article? The math error is described in line 2, the actual error about two screens down, highlighted in red.
Related thread from last year, with 443 comments:
https://news.ycombinator.com/item?id=32360533 ("NSA, NIST, and post-quantum crypto: my second lawsuit against the US government (cr.yp.to)")
> Discovering the secret workings of NISTPQC. I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.
As much as I generally loathe djb personally, professionally he will always have my support as he’s been consistently willing to take the federal government to task in court. It brings me great joy to see he’s still at it.
Why do you dislike him personally?
https://news.ycombinator.com/item?id=13891900
Might be because he’s a bit of a “Linus” in crypto with the same ego and temper.
However, the man has done so much to advance privacy and cryptography I think he's earned the right to be a bit snippy, especially when he's discussing something so complex that 99% of the comments are "too long to read" and "I read it but I still don't understand it".
Notwithstanding DJB's importance to cryptography, and the fact that I'm ignorant of a large number of details here, there was a point where he lost a lot of credibility with me.
Specifically, when he gets to the graphs, he says "NIST chose to deemphasize the bandwidth graph by using thinner red bars for it." That is just not proven by his evidence, and there is a very plausible explanation for it. The graph with the thinner bars is a bar chart with more data points than the other graph. Open up your favorite charting application and compare a graph with 12 data points to one with 9... of course the one with 12 data points has thinner bars (see the sketch below). At this point, it feels quite strongly to me that he is trying to interpret every action in the most malicious way possible.
In the next bullet point, he complains that they're not using a log scale for the graph... where everything is in the same order of magnitude. That doesn't sound like a good use case for log scale, and I'm having a hard time trying to figure out why it might be justified in this case.
Knowing that DJB was involved in NTRU, it's a little hard to shake the feeling that a lot of this is DJB just being salty about losing the competition.
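The bar-width point is easy to check. A minimal matplotlib sketch with made-up data -- same figure size, library-default widths -- renders the 12-bar panel with visibly thinner bars:

    import matplotlib.pyplot as plt

    fig, (ax9, ax12) = plt.subplots(1, 2, figsize=(8, 3))
    ax9.bar(range(9),   [3, 5, 2, 7, 4, 6, 1, 8, 5])           # 9 points
    ax12.bar(range(12), [3, 5, 2, 7, 4, 6, 1, 8, 5, 2, 6, 4])  # 12 points
    # Default bar width is 0.8 *data units*; the 12-bar axis packs more
    # units into the same screen space, so each bar draws thinner.
    ax9.set_title("9 bars"); ax12.set_title("12 bars")
    plt.tight_layout(); plt.show()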
>At this point, it feels quite strongly to me that he is trying to interpret every action in the most malicious way possible.
Given the long and detailed history of various governments and government agencies purposefully attempting to limit the public from accessing strong cryptography, I tend to agree with the "assume malice by default" approach here. Assuming anything else, to me at least, seems pretty naive.
Eh, it goes both ways. Back in the 1970's and 1980's there was a whole lot of suspicion about changes that the NSA made to DES S-boxes with limited explanation- was it a backdoor in some way? Then around 1990 white hats "discovered" differential cryptanalysis, and realized that the changes that were made to the algorithm actually protected it from a then-unknown (to the general public) cryptographic attack. Differential cryptanalysis worked beautifully on some other popular cryptosystems of the era, e.g. the FEAL-4 cipher could be broken with just 8 chosen plaintexts, while DES offered protection up to 2^47 chosen plaintexts.
The actual way that the NSA had tried to limit DES was to cap its key length at 48 bits, figuring that their advantage in computing power would let them brute force it when no one else could. (NBS, NIST's predecessor, compromised between the NSA's desire for 48 and the rest of the world's desire for 64, which was why DES had the always-bizarre 56-bit key.) So sometimes they strengthen it, sometimes they weaken it, and so I'm not sure it's appropriate to presume malice.
2 replies →
There's a meaningful difference between assuming an actor is malicious or untrustworthy and going out of your way to provide the maximally malicious interpretation of each of their actions. As a matter of rhetoric, the latter tends to give the impression of a personal vendetta.
DJB has lost a ton of credibility already within the non-government cryptography community for his frankly unhinged rants on the PQC mailing list.
If you read his posts there, it’s hard not to come away with the impression that he’s just upset his favourite scheme wasn’t chosen.
2 replies →
If you continue reading, you'll find that they aren't responding to requests for clarification on their hand-waving computations. Suspicion is definitely warranted.
> Knowing that DJB was involved in NTRU, it's a little hard to shake the feeling that a lot of this is DJB just being salty about losing the competition.
There aren't a lot of people in the world with the technical know-how for cryptography. It's clear that competitors in this space are going to be reviewing each other's work.
Yes, that was the premise of the competition, and was in fact what happened.
Sure, but this was just a weird thing to home in on.
FWIW, there are two NTRUs: the original one, which had no djb involvement, and NTRU Prime, which does.
Yeah. It does honestly sound like he looked at the options and decided that this one was the best, then he started contributing.
Something I've learned from a career of watching cryptographer flame wars: Don't bet against Bernstein, and don't trust NIST.
http://web.archive.org/web/20231003195013/https://blog.cr.yp...
https://archive.ph/NrOG6
NIST responded: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/W2VO...
I'm not sure N(IST)SA has any credibility left. The popularity of Curve25519 over their P-curves is encouraging, and it would be great to see the community continue in this direction and largely ignore them going forward. The government shouldn't be leading or deciding; it would be better organized around gathering current consensus and following it when it comes to FIPS, regulation, etc.
The NIST standardization process appears to have a grey area, particularly around the selection of constants.
The skepticism around standardization, and the advocacy for adopting algorithms directly from cryptographers instead, sheds light on potential shortcomings in the current system.
There is definitely a need for more transparent and open scrutiny in algorithm standardization to ensure security objectives are met.
Related note: Government employees (including military, intel) are just people, and worse, bureaucrats. They aren't magical wizards who can all do amazing things with mathematics and witchcraft. If they were good at what they do, they wouldn't need ever-increasing funding and projects to fix things.
Cryptanalysis and encryption are somewhat of an exception to this. There are some extremely smart people who work in these areas for the government, precisely because funding and application is on a different scale.
Very few folks except the gov't have a real existential need for best-of-breed crypto, frankly.
My takeaway (impression) from the DJB post is that the NISTPQC evaluation does not seem to establish a firm security level for the algorithms: it is not clear-cut, and it does not provide a good, conservative lower bound for the security of the algorithms selected.
"Security is supposed to be job #1. So I recommend eliminating Kyber-512."
It would be interesting to see Signal Sciences' response to this Bernstein post
Signal seems to use Kyber-1024, which does meet the NIST contest's security criteria.
I wrote some more details here: https://community.signalusers.org/t/signal-blog-quantum-resi...
Who is Signal Sciences?
Actually, I meant Open Whisper, the company behind Signal.
Got my wires crossed.
1 reply →
Minor typo. "How can NIST justify throwing NIST-509 away?" should be "How can NIST justify throwing NTRU-509 away?"
Scorpions and frogs as usual.
If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.
However, this man is one of the foremost cryptographers in the world, he has basically single-handedly killed US government crypto export restrictions back in the day, and (not least of all because of Snowden) we know that the NSA really is trying to sabotage cryptography.
Also, he basically founded the field of post-quantum cryptography.
Is NIST trying to derail his work by standardizing crappy algorithms with the help of the NSA? Who knows. But to me it does smell like that.
Bernstein has a history of being right, and NIST and the NSA have a history of sabotaging cryptographic standards (google Dual_EC_DRBG if you don't know the story).
This comment is factually incorrect on a number of levels.
1) single-handedly killed US government crypto export restrictions - Bernstein certainly litigated, but was not the sole actor in this fight. For example, Phil Zimmerman, the author of PGP, published the source code of PGP as a book to work around US export laws, which undoubtedly helped highlight the futility of labelling open source software as a munition: https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...
2) Bernstein "founded" the field of post quantum cryptography: Uh. Ok. That's not how academia works. Bernstein was certainly an organiser of the first international workshop on post quantum cryptography, but that's not the same as inventing a field. Many of the primitives that are now candidates were being published long before this, McEliece being one of the oldest, but even Atjai's lattice reductions go back to '97.
3) The dual_ec rng was backdoored (previously read was and is fishy, poor wording on my part), but nobody at the time wanted NIST to standardize it because it was a _poor PRNG anyway_: slow and unnecessarily complicated. Here is a patent from Scott Vanstone on using DUAL_EC for "key escrow" which is another way of saying "backdoor": https://patentimages.storage.googleapis.com/32/9b/73/fe5401e... - filed in 2006. In case you don't know Scott Vanstone, he's the founder of Certicom. So at least one person noticed. This was mentioned in a blog post as a result of the Snowden leaks working out how the backdoor happened: https://blog.0xbadc0de.be/archives/155
NSA have been caught in a poor attempt to sabotage a standard that nobody with half a brain would use. On the other hand NSA also designed SHA-2, which you are likely using right now, and I'm not aware of anyone with major concerns about it. When I say NSA designed it, I don't mean "input for a crypto competition" - a team from the NSA literally designed it and NIST standardized it, which is not the case for SHA-3, AES or the current PQC process.
DJB is a good cryptographer, better than me for sure. But he's not the only one - and some very smart, non-NSA, non-US-citizen cryptographers were involved in the design of Kyber, Dilithium, Falcon etc.
Dual EC is virtually certain to be a backdoor.
I had the same take on Dual EC prior to Snowden. The big revelation with Snowden wasn't NSA involvement in Dual EC, but rather that (1) NSA had intervened to get Dual EC defaulted-on in RSA's BSAFE library, which was in the late 1990s the commercial standard for public key crypto, and (2) that major vendors of networking equipment were --- in defiance of all reason --- using BSAFE rather than vetted open-source cryptography libraries.
DJB probably did invent the term "post-quantum cryptography". For whatever that's worth.
8 replies →
Bernstein is often right, despite the controversy around the Gimli permutation.
In this particular case it's worth noting that neither BSI (Germany) nor NLNCSA (The Netherlands) recommends Kyber.
Unfortunately, alternative algorithms are more difficult to work with due to their large key sizes among other factors, but it's a price worth paying. At Backbone we've opted not to go down the easy route.
> If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.
> However, this man is one of the foremost cryptographers in the world […]
It's possible to be both (not saying Bernstein is).
Plenty of smart folks have 'jumped the shark' intellectually: Ted Kaczynski, the Unabomber, was very talented in mathematics before he went off the deep end.
> Plenty of smart folks have 'jumped the shark' intellectually: Ted Kaczynski, the Unabomber, was very talented in mathematics before he went off the deep end.
Kaczynski dropped out of society to live in a cabin alone at 29. He delivered his first bomb at 35. I'm not sure this is a reasonable comparison to invoke in any way whatsoever.
When DJB starts posting about the downfall of modern society from his remote cabin in Montana, perhaps, but as far as I know he's still an active professor working from within the University system.
1 reply →
There was a smart guy once who went crazy. We should assume smart people are crazy.
2 replies →
Due to likely CIA-sponsored mental abuse (MKULTRA), absurdly.
>If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.
Can I point out that Ted Kaczynski was also actually a mathematical prodigy, having been accepted into Harvard on a scholarship at 16?
If you want, sure, but I think the reason he was mentioned with a negative connotation might be more to do with the murders he committed.
An interesting set of comments (by tptacek) from a thread in 2022 (I wonder if they still hold the same opinion in light of this latest post on NIST-PQC by djb):
> The point isn't that NIST is trustworthy. The point is that the PQC finalist teams are comprised of academic cryptographers from around the world with unimpeachable reputations, and it's ludicrous to suggest that NSA could have compromised them. The whole point of the competition structure is that you don't simply have to trust NIST; the competitors (and cryptographers who aren't even entrants in the contest) are peer reviewing each other, and NIST is refereeing.
> What Bernstein is counting on here is that his cheering section doesn't know the names of any cryptographers besides "djb", Bruce Schneier, and maybe, just maybe, Joan Daemen. If they knew anything about who the PQC team members were, they'd shoot milk out their nose at the suggestion that NSA had suborned backdoors from them. What's upsetting is that he knows this, and he knows you don't know this, and he's exploiting that.
---
> I spent almost 2 decades as a Daniel Bernstein ultra-fan --- he's a hometown hero, and also someone whose work was extremely important to me professionally in the 1990s, and, to me at least, he has always been kind and cheerful... I know what it's like to be in the situation of (a) deeply admiring Bernstein and (b) only really paying attention to one cryptographer in the world (Bernstein).
> But talk to a bunch of other cryptographers --- and, also, learn about the work a lot of other cryptographers are doing --- and you're going to hear stories. I'm not going to say Bernstein has a bad reputation; for one thing, I'm not qualified to say that, and for another I don't think "bad" is the right word. So I'll put it this way: Bernstein has a fucked up reputation in his field. I am not at all happy to say that, but it's true.
---
> What's annoying is that [Bernstein is] usually right, and sometimes even right in important new ways. But he runs the ball way past the end zone. Almost everybody in the field agrees with the core things he's saying, but almost nobody wants to get on board with his wild-eyed theories of how the suboptimal status quo is actually a product of the Lizard People.
(https://news.ycombinator.com/item?id=32365679)
I don't think the "these finalist teams are trustworthy" argument is completely watertight. If the US wanted to make the world completely trust and embrace subtly-broken cryptography, a pretty solid way to do that would be to make competition where a whole bunch of great, independent teams of cryptography researchers can submit their algorithms, then have a team of excellent NSA cryptographers analyze them and pick an algorithm with a subtle flaw that others haven't discovered. Alternatively, NIST or the NSA would just to plant one person on one of the teams, and I'm sure they could figure out some clever way to subtly break their team's algorithm in a way that's really hard to notice. With the first option, no participant in the competition has to that there's any foul play. In the second, only a single participant has to know.
Of course I'm not saying that either of those things happened, nor that they would be easy to accomplish. Hell, maybe they're literally impossible and I just don't understand enough cryptography to know why. Maybe NIST truly has our best interest at heart this time. I'm just saying that, to me, it doesn't seem impossible for NIST to ensure that the winner of their cryptography contests is an algorithm that's subtly broken. And given that there's even a slight possibility, maybe distrusting the NIST recommendations isn't a bad idea. They do after all have a history of trying to make the world adopt subtly broken cryptography.
10 replies →
I hope he finds all sorts of crazy documents from his FOIA thing. FOIA lawsuits are a very normal part of the process (I've had the same lawyers pry loose stuff from my local municipality). I would bet real money against the prospect of him finding anything that shakes the confidence of practicing cryptography engineers in these standards. Many of the CRYSTALS team members are quite well regarded.
> actually a product of the Lizard People
Nobody says that (not that I've seen).
My reading is that he's a combative academic, railing against a standards body that refuses to say how it works and that has a deserved reputation for dishonesty and shenanigans.
1 reply →
This also skips his pioneering work in microservice architecture, as exemplified by the structure of qmail, djbdns, and daemontools.
Bernstein did not “found” the field of PQC. He wasn’t even doing cryptography when this field was founded!
Also, the schemes he’s railing against are also the work of top cryptographers in the space.
Love the narrative style of this writing, second-guessing the erroneous thought processes. Are they deceptive? Who knows.
What worries me is that it's neither malice nor incompetence, but that a new darker force has entered our world even at those tables with the highest stakes.... dispassion and indifference.
It's hard to get good people these days. A lot of people stopped caring. Even amongst the young and eager. Whether it's climate change, the world economic situation, declining education, post pandemic brain-fog, defeat in the face of AI, chemicals in the water.... everywhere I sense a shrug of slacking off, lying low, soft quitting, and generally fewer fucks are given all round.
Maybe that's just my own fatigue, but in security we have to be vigilant all the time, and there's only so much energy humans can bring to that. That's why I worry that we will lose against AI. Not because it's smarter, but because it doesn't have to _care_, whereas we do.
Bad systems beat good people.
There are a lot of symptoms to distract yourself with. Focus on the game instead.
A society full of good people will sort out the rest.
This apathy is an interesting phenomenon, let's not ignore it. The Internet has brought us a wealth of knowledge but it has also shown us how truly chaotic the world really is. And negativity is a profitable way to drive engagement, so damn near everyone can see how problematic our society is. And when the algorithm finds something you care to be sad about, it will show you more, more, and ever more all the way into depression.
This is the lasting legacy of the Internet, now. Not freedom for all to seek and learn, but freedom for the negativity engines to seek out your brain and suck you into personal obliteration.
A society of good people? Nobody really cares any more. And I do agree with the gp; if you look, you can see it everywhere. What is this going to become? Collective helplessness as we eke out what little bits of personal fulfillment we can get in between endless tragedy and tantalizing promise?
3 replies →
The car is on fire and there is no driver at the wheel.
Bollocks to the car. It's the rest of us innocent pedestrians who need to take cover. :)
Unfortunately, the NSA & NIST are most likely recommending a quantum-proof scheme that they've already developed cryptanalysis against, either through high q-bit proprietary technology or specialized de-latticing algorithms.
The NSA is very good at math, so I'd be thoroughly surprised if this analysis was error by mistake rather than error through intent.
The NSA also has a mission-based interest in _breaking_ other people's crypto though, which is generally known.
Which is generally known, so I'm surprised by your argument. Even if the NSA knows more than they're telling us, that doesn't make most of us less worried, since their goal may not be strengthening the public's cryptography!
Yes: https://en.wikipedia.org/wiki/Dual_EC_DRBG
Also, we still to this day do not know where the seed for P256 and P384 came from. And we're using that everywhere. There is a non-zero chance that the NSA basically has a backdoor for all NIST ECC curves, and no one actually seems to care.
2 replies →
Isn't that what the person you're replying to said?
1 reply →
I just find it sad that it's things like these that make it impossible for the layman to figure out what is going on with, for example, Mochizuki's new stuff
I have no reason to doubt that a lot of math has been made more difficult than necessary just because it is known to give a subtle military advantage in some cases, but this isn't new.
[flagged]
1 reply →
"High q-bit proprietary technology" and "specialized de-latticing algorithms" are made up terms that nobody uses.
I'm stuck on trying to work out what it would mean to de-lattice something. Would that transform a lattice basis into a standard vector space basis in R or something, or, like MOV, would it send the whole lattice to an element of some prime extension field?
In my mind's eye, it's cooler: it's like, you render the ciphertext as a raster image, and then "de-lattice" it to reveal the underlying plaintext, scanline by scanline.
3 replies →
Just bounce a graviton particle beam off the main deflector dish.
> through high q-bit proprietary technology
Somebody would leak or steal that as it would be a GIGANTIC leap forward in our engineering skill at the quantum level.
Getting more than a handful of qubits to stay coherent and not collapse into noise is a huge research problem right now, and progress has been practically non-existent for almost a decade.
"Specialized de-latticing algorithms"?
[dead]
[dead]
[dead]
Assuming djb is correct and the current process is broken... is trying to expose it and then fix it through FOIA requests really the best approach?
If your codebase is hairy enough, and the problem to be solved is fundamentally fairly simple, sometimes it's better to rewrite than refactor. Doubly so if you believe a clever adversary has attempted to insert a subtle backdoor or bugdoor.
What would a better crypto selection process look like? I like the idea of incorporating "skin in the game" somehow... for example, the cryptographer who designs the scheme could wager some cash that it won't be broken within a particular timeframe. Perhaps a philanthropist could offer a large cash prize to anyone who's able to break the winning algorithm. Etc.
Taking money from the cryptographers creates exactly the opposite incentive from the one you want: your NSA black-budget slush fund has orders of magnitude more spending power than anybody honest could hope to acquire.