The most important point is buried at the bottom of the page:
> all the post-quantum algorithms implemented by OpenSSH are "hybrids" that combine a post-quantum algorithm with a classical algorithm. For example mlkem768x25519-sha256 combines ML-KEM, a post-quantum key agreement scheme, with ECDH/x25519, a classical key agreement algorithm that was formerly OpenSSH's preferred default. This ensures that the combined, hybrid algorithm is no worse than the previous best classical algorithm, even if the post-quantum algorithm turns out to be completely broken by future cryptanalysis.
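As a rough illustration of why the hybrid can't end up weaker than its classical half, here is a minimal sketch of the shape of such a combiner (hypothetical helper name and simplified inputs; the real OpenSSH key schedule hashes more of the handshake transcript than just the two shared secrets):

    import hashlib
    import os

    def hybrid_combine(mlkem_shared: bytes, x25519_shared: bytes) -> bytes:
        # Hash the concatenation of the two shared secrets. Breaking
        # ML-KEM alone still leaves the unknown X25519 secret in the
        # input (and vice versa), so the output is no weaker than the
        # stronger of the two key agreements.
        return hashlib.sha256(mlkem_shared + x25519_shared).digest()

    # Stand-ins for the outputs of the two key agreements (random here;
    # in the real protocol they come from ML-KEM decapsulation and the
    # X25519 Diffie-Hellman computation).
    mlkem_ss = os.urandom(32)
    x25519_ss = os.urandom(32)
    session_secret = hybrid_combine(mlkem_ss, x25519_ss)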
Using a hybrid scheme ensures that you're not actually losing any security compared to the pre-quantum implementation.
Hybrid schemes give you improved security against algorithmic flaws. If either algorithm being used is broken, the other gives you resilience. But hybrid schemes also double (or more) your exposure to ordinary implementation bugs and side-channels.
Since Quantum Computers at scale aren't real yet, and those kinds of issues very much are, you'd think that'd be quite a trade-off. But so much work has gone into security research and formal verification over the last 10 years that the trade-off really does make sense.
Unless the implementation bug is severe enough to give RCE, memory dumping, or similar, I don't see how a bug in the MLKEM implementation (for example) would be able to leak the x25519 secret, even with sidechannels. A memory-safe impl would almost guarantee you don't have any bugs of the relevant classes (I know memory-safe != sidechannel-safe, but I don't see how sidechannels would be relevant). You still need to break both to break the whole scheme.
I always wondered about this claim.
If I have a secret, A, and I encrypt it with classical algorithm X such that it becomes A', then encrypt the result again with nonclassical algorithm Y such that it becomes A'', doesn't any claim that applying the second algorithm could make it weaker imply that any X-encrypted string could later be made easier to crack by applying Y?
Or is it that by doing them sequentially you could potentially reveal some information about when the encryption took place?
What kinds of side channels are you thinking of? Given the key exchanges have a straightforward sha256/sha512 combiner, it would be surprising if a flaw in one of the schemes gave a real vulnerability, no?
I could see it being more of a problem for signing.
NSA recommends the rule-of-two, I think even before quantum resistant algorithms:
https://en.wikipedia.org/wiki/Multiple_encryption
The rest of the article has some stuff on what can go wrong if the implementations aren't truly independent.
So you are OK with having your data suddenly unencrypted at some point in the not-so-distant future?
It's a trade-off, yes, but that doesn't make it useless.
The industry definitely seems to be going in this hybrid PQC-classical direction for the most part. At least until we know there's a real quantum computer somewhere that renders the likes of RSA, ECC, and DH no longer useful, it seems this conservative approach of using two different types of locks in parallel might be the safest bet for now.
However, what's notable is that the published CNSA 2.0 algorithms in this context are exclusively of the post-quantum variety, and even though there is no explicit disallowing of hybrid constructions, NSA publicly deems them unnecessary (from their FAQ [0]):
> NSA has confidence in CNSA 2.0 algorithms and will not require NSS developers to use hybrid certified products for security purposes.
[0] https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...
They don't endorse hybrid constructions but they also don't ban them. From the same document:
> However, product availability and interoperability requirements may lead to adopting hybrid solutions.
In light of the recent hilarious paper around the current state of quantum cryptography[1], how big is the need for the current pace of post quantum crypto adoption?
As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms which leads to huge overheads in network traffic and of course CPU time.
[1]: https://eprint.iacr.org/2025/1237
The page only talks about adopting PQC for key agreement for SSH connections, not encryption in general so the overhead would be rather minimal here. Also from the FAQ:
"Quantum computers don't exist yet, why go to all this trouble?"
Because of the "store now, decrypt later" attack mentioned above. Traffic sent today is at risk of decryption unless post-quantum key agreement is used.
"I don't believe we'll ever get quantum computers. This is a waste of time"
Some people consider the task of scaling existing quantum computers up to the point where they can tackle cryptographic problems to be practically insurmountable. This is a possibility. However, it appears that most of the barriers to a cryptographically-relevant quantum computer are engineering challenges rather than underlying physics. If we're right about quantum computers being practical, then we will have protected vast quantities of user data. If we're wrong about it, then all we'll have done is moved to cryptographic algorithms with stronger mathematical underpinnings.
Not sure if I'd take the cited paper (while fun to read) too seriously to inform my opinion the risks of using quantum-insecure encryption rather than as a cynical take on hype and window dressing in QC research.
>it appears that most of the barriers to a cryptographically-relevant quantum computer are engineering challenges rather than underlying physics
I heard this 15 years ago when I started university. People claimed all the basics were done, that we "only" needed to scale, and that we would see practical quantum computers in 5-10 years. Today I still see the same estimates: maybe 5 years from extreme optimists, 10-20 years from more reserved people. It's the same story as nuclear fusion. But who's prepping for unlimited energy today, even though it would make sense to build future industrial environments around it to stay competitive?
It's been "engineering challenges" for 30 years. At some point, "engineering challenges" stops being a good excuse, and that point was about 20 years ago.
At some point, someone may discover some new physics that shows that all of these "engineering challenges" were actually a physics problem, but quantum physics hasn't really advanced in the last 30 years so it's understandable that the physicists are confused about what's wrong.
Those are two odd questions to even ask/answer, since, first, quantum computers exist and, second, we have them at a certain scale. I assume what they mean is a scale at which they can do calculations that surpass existing classical computation.
That paper is hilarious, and is correct that there's plenty of shit to make fun of... but there's also progress. I recommend watching Sam Jacques' talk from PQCrypto 2025 [0]. It would be silly to delay PQC adoption by focusing on the irrelevant bad papers.
In the past ten years, on the theory side, the expected cost of cryptographically relevant quantum factoring has dropped by 1000x [1][2]. On the hardware side, fault tolerance demonstrations have gone from repetition code error rates of 1% error per round [3] to 0.00000001% error per round [fig3a of 4], with full quantum codes being demonstrated with an error rate of 0.2% [fig1d of 4] via a 2x reduction in error each time distance is increased by 2.
If you want to track progress in quantum computing, follow the gradual spinup of fault tolerance. Noise is the main thing blocking factoring of larger and larger numbers. Once the quality problem is turned into a quantity problem, then those benchmarks can start moving.
[0]: https://www.youtube.com/watch?v=nJxENYdsB6c
[1]: https://arxiv.org/abs/1208.0928
[2]: https://arxiv.org/abs/2505.15917
[3]: https://arxiv.org/abs/1411.7403
[4]: https://arxiv.org/abs/2408.13687
As a number of people have observed, what's happening now is mostly about key establishment, which tends to happen relatively infrequently, and so the overhead is mostly not excessive. With that said, a little more detail:
- Current PQ algorithms, for both signature and key establishment, have much larger key sizes than traditional algorithms. In terms of compute, they are comparably fast if not faster.
- Most protocols (e.g., TLS, SSH, etc.) do key establishment relatively infrequently (e.g., at the start of the connection) and so the key establishment size isn't a big deal, modulo some interoperability issues because the keys are big enough to push you over the TCP MTU, so you end up with the keys spanning two packets. One important exception here is double ratchet protocols like Signal or MLS which do very frequent key changes. What you sometimes see here is to rekey with PQ only occasionally (https://security.apple.com/blog/imessage-pq3/).
- In the particular case of TLS, message size for signatures is a much bigger deal, to a great extent because your typical TLS handshake involves a lot of signatures in the certificate chain. For this reason, there is a lot more concern about the viability of PQ signatures in TLS (https://dadrian.io/blog/posts/pqc-signatures-2024/). Possibly in other protocols too, but I don't know them as well.
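Some very rough, from-memory numbers to illustrate why signatures are the painful part for TLS (the sizes are approximate and the per-handshake signature count is just an assumption for illustration):

    # Ballpark sizes in bytes; a typical web-PKI handshake carries several
    # signatures (certificate chain, CertificateVerify, SCTs).
    classical_sig = 72        # ~ECDSA P-256 signature
    pq_sig = 3300             # ~ML-DSA-65 signature (approximate)
    sigs_per_handshake = 5    # rough assumption for illustration
    print("classical signatures:   ", classical_sig * sigs_per_handshake, "bytes")
    print("post-quantum signatures:", pq_sig * sigs_per_handshake, "bytes")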
>In light of the recent hilarious paper around the current state of quantum cryptography
I assumed that paper was intended as a joke. If it's supposed to be serious criticism of the concept of quantum computing then it's pretty off-base, akin to complaining that transistors couldn't calculate Pi in 1951.
> how big is the need for the current pace of post quantum crypto adoption?
It comes down to:
1) do you believe that no cryptographically-relevant quantum computer will be realised within your lifespan
2) how much you value the data that you are trusting to conventional cryptography
If you believe that no QC will arrive in a timeframe you care about or you don't care about currently-private data then you'd be justified in thinking PQC is a waste of time.
OTOH if you're a maintainer of a cryptographic application, then IMO you don't have the luxury of ignoring (2) on behalf of your users, irrespective of (1).
Besides what's public knowledge, I tend to put a bit of stock in our intelligence agency calling for PQ adoption for systems that need to remain confidential for 20 years or more
edit: adding in some sources
2014: "between 2030 and 2040" according to https://www.aivd.nl/publicaties/publicaties/2014/11/20/infor... (404) via https://tweakers.net/reviews/5885/de-dreiging-van-quantumcom... (Dutch)
2021: "small chance it arrives by 2030" https://www.aivd.nl/documenten/publicaties/2021/09/23/bereid... (Dutch)
2025: "protect against ‘store now, decrypt later’ attacks by 2030", joint paper from 18 countries https://www.aivd.nl/binaries/aivd_nl/documenten/brochures/20... (English)
I don't want my government to keep secrets for 20 years. There is nothing I am OK with them doing that they can't be generally open about in time. Ex. the MLK files. No justification for the courts saying that the FBI files regarding MLK have to be kept under lock and key for 50 years.
That's just a fun joke paper deflating some of the more aggressive hype around QC. You shouldn't use it for making security and algorithm adoption decisions.
I don't think many cryptography engineers take Gutmann's paper seriously.
From the paper:
> After our successful factorisation using a dog, we were delighted to learn that scientists have now discovered evidence of quantum entanglement in other species of mammals such as sheep [32]. This would open up an entirely new research field of mammal-based quantum factorisation. We hypothesise that the production of fully entangled sheep is easy, given how hard it can be to disentangle their coats in the first place. The logistics of assembling the tens of thousands of sheep necessary to factorise RSA-2048 numbers is left as an open problem.
The paper is a joke, but Gutmann does make some useful, non-joke suggestions in section 7. There's probably room for a serious, full-length paper on quantum factorization evaluation criteria.
I don't take Gutmann seriously.
> As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms
This is somewhat correct, but needs some nuance.
First, the problem is bigger with signatures, which is why nobody is happy with the current post quantum signature schemes and people are working on better pq signature schemes for the future. But signatures aren't an urgent issue, as there is no "decrypt later" scenario for signatures.
For encryption, the overhead exists, but it isn't too bad. We are already deploying pqcrypto, and nobody seems to have an issue with it. Use a current OpenSSH and you use mlkem. Use a current browser with a server using modern libraries and you also use mlkem. I haven't heard anyone complaining that the Internet got so much slower in recent years due to pqcrypto key exchanges.
Compared to the overall traffic we use commonly these days, the few extra kb during the handshake (everything else is not affected) doesn't matter much.
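For a sense of scale, a back-of-the-envelope comparison (public parameter sizes quoted from memory, so treat them as approximate):

    # Rough on-the-wire cost of the hybrid KEX vs. plain X25519.
    # ML-KEM-768: encapsulation key ~1184 B, ciphertext ~1088 B;
    # X25519 public keys are 32 B in each direction.
    x25519_only = 32 + 32                 # client pub + server pub
    hybrid = (1184 + 32) + (1088 + 32)    # client: ek + pub, server: ct + pub
    extra = hybrid - x25519_only
    print(f"classical kex:        {x25519_only} bytes")
    print(f"hybrid kex:           {hybrid} bytes")
    print(f"extra per connection: {extra} bytes (~{extra / 1024:.1f} KiB)")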
I imagine the key exchange is just once per connection, right? So the overhead seems not too bad.
Especially since I think a pretty large number of computers/hostnames that are ssh'able today will probably have the same root password if they're still connected to the internet 10-20 years from now
So what person is running an SSH server and configuring it to use post-quantum crypto, but is still using password auth? Priorities are out of whack.
Not that this is a bad thing, but first start using keys, then start rotating them regularly and then worry about theoretical future attacks.
root can't normally log in via ssh, unless the default configuration is changed.
>... which leads to huge overheads in network traffic and of course CPU time.
This is just the key exchange. You're exchanging keys for the symmetric cipher you'll be using for traffic in the session. There's really no overhead to talk about.
Indeed, I'll expand a bit: Asymmetrical crypto has always been incredibly slow compared to symmetrical crypto which is either HW accelerated (AES) or fast on the CPU (ChaCha20).
But since the symmetrical key is the same for both sides you must either share it ahead of time or use asymmetrical crypto to exchange the symmetrical keys to go brrrrr
This still greatly affects connections/second, which is an important metric, especially since servers don't always like very long-lived connections, so you may get plenty of connections during an HTTP interaction.
>As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms which leads to huge overheads in network traffic and of course CPU time.
Eh? Public-key (asymmetric) cryptography is already very expensive compared to symmetric crypto even in the classical setting; that's normal. What it's used for is the vital but limited operation of key exchange for AES or whatever fast symmetric algorithm is used afterwards. My understanding (and serious people in the field please correct me if I'm wrong!) is that the threat from a potential cryptographically relevant quantum computer applies almost entirely to key exchange, not symmetric encryption. The best theoretical search algorithm against symmetric ciphers is Grover's, which offers a square-root speedup and is thus trivially countered, if necessary, by doubling the key size (i.e. a 256-bit key gives roughly 128-bit classical-equivalent security against Grover, and 512 bits gives 256, which is already more than enough).

The vast majority of a given SSH session's traffic isn't handshakes unless something is quite odd, and you're likely going to have a pretty miserable experience in that case regardless. So even if the initial handshake gets significantly more expensive, it should be pretty irrelevant to network overhead; it still only happens at the start of a given session, right?
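The Grover point really is just arithmetic on the exponent (toy illustration; it also ignores the enormous constant overheads that make Grover against a real cipher impractical anyway):

    # Grover's search gives at best a quadratic speedup, so a k-bit
    # symmetric key retains roughly k/2 bits of security against it.
    for key_bits in (128, 256, 512):
        print(f"{key_bits}-bit key -> ~{key_bits // 2}-bit security vs. Grover")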
I know I’m asking for too much, but.
The macOS app Secretive [1] stores SSH keys in the Secure Enclave. To make it work, they’ve selected an algorithm supported by the SE, namely ecdsa-sha2-nistp256.
I don’t think SE supports PQ algorithms, but would it be possible to use a “hybrid key” with a combined algorithm like mlkem768×ecdsa-sha2-nistp256, in a way that the ECDSA part is performed by the SE?
[1]: https://github.com/maxgoedjen/secretive
The notice at stake is about key agreements (aka KEX aka Key Exchange), not about the keys themselves.
If you look at http://mdoc.su/o/ssh_config.5#KexAlgorithms and http://bxr.su/o/usr.bin/ssh/kex-names.c#kexalgs, `ecdsa-sha2-nistp256` is not a valid option for the setting (although `ecdh-sha2-nistp256` is).
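A quick way to see the split on your own machine (assuming an OpenSSH client is installed) is to ask ssh to list each category:

    import subprocess

    # `ssh -Q kex` lists key-exchange algorithms (where the hybrid PQ entries
    # such as mlkem768x25519-sha256 appear on new enough versions), while
    # `ssh -Q key` lists key/signature types (where ecdsa-sha2-nistp256 lives).
    for query in ("kex", "key"):
        result = subprocess.run(["ssh", "-Q", query],
                                capture_output=True, text=True, check=True)
        print(f"--- ssh -Q {query} ---")
        print(result.stdout)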
Ohh, this is distinct from the pubkey algorithms. Looks like I need a refresher on how SSH works then :-)
Thanks!
To comment on the part about what keys Secretive uses, I looked at this recently and I think it looks like the SE will be able to do ML-KEM soon.
https://developer.apple.com/documentation/cryptokit/secureen...
Not totally sure that I'm reading it right, since I've never done macOS development before, but I'm a big fan of Secretive and use it whenever possible. If I've got it right, maybe Secretive can add PQ support once ML-KEM is out of beta.
ssh-audit [1] should be updated to test for this theoretical algo. I still get an "A" despite fixating on a specific algo and not including the quantus. I'm doing the cha-cha.
[1] - https://www.ssh-audit.com/
Makes sense to get ahead of this. Especially when it’s a pretty trivial key swop.
Which of the two options given is stronger? Presumably the 512 one?
They're not the same, they're completely different:
> Additionally, all the post-quantum algorithms implemented by OpenSSH are "hybrids" that combine a post-quantum algorithm with a classical algorithm. For example mlkem768x25519-sha256 combines ML-KEM, a post-quantum key agreement scheme, with ECDH/x25519, a classical key agreement algorithm that was formerly OpenSSH's preferred default. This ensures that the combined, hybrid algorithm is no worse than the previous best classical algorithm, even if the post-quantum algorithm turns out to be completely broken by future cryptanalysis.
The 256 one is actually newer than the 512 one, too:
> OpenSSH versions 9.0 and greater support sntrup761x25519-sha512 and versions 9.9 and greater support mlkem768x25519-sha256.
We're nowhere near the point where there's any general concern regarding the sizes of 256 bits or 512 bits for hashes, block sizes, key sizes etc. Currently we don't need to consider the problem as a question of what time is required, because we don't have the electrical energy required to explore even a fraction of an unfathomably smaller 128 bit space. We don't have computers that can ingest such power either. "Relax, guy."
mlkem is a sane default, since it's the construction the rest of the industry is standardizing on.
Did a bit more research and results square with what you said. They both seem solid but NIST and friends seem to have concluded mlkem is the way
Is there a PQC hybrid algorithm available for OpenSSH that is compliant with FIPS 140-3?
FIPS certification is given to an entire "cryptographic module" that includes hardware and software. "FIPS compliant OpenSSH" is therefore a misnomer, you have to certify OpenSSH running on a particular OS on particular hardware.
FIPS compliance does require use of specific algorithms. ML-KEM is NIST approved and AFAIK NIST is on record saying that hybrid KEMs are fine. My understanding is therefore that it would be possible for mlkem768x25519-sha256 (supported by OpenSSH) to be certified.
caveat: IANAFA (I am not a FIPS auditor)
> you have to certify OpenSSH running on a particular OS on particular hardware
Right, but if you use the certified version of OpenSSH, it will only allow you to use certain algorithms.
> ML-KEM is NIST approved and AFAIK NIST is on record saying that hybrid KEMs are fine. My understanding is therefore that it would be possible for mlkem768x25519-sha256 (supported by OpenSSH) to be certified.
ML-KEM is allowed, and SHA-256 is allowed. But AFAIK, x25519 is not, although finding a definitive list is a lot more difficult for 140-3 than it was for 140-2, so I'm not positive. So I don't think (but IANAFA as well) mlkem768x25519-sha256 would be allowed, although I would expect a hybrid that used ECDH over a NIST curve instead of x25519 would probably be ok. But again, IANAFA, and would be happy if I was wrong.
That's great.
I was thinking about whether to move the Terminal-based microblogging / chat app I'm building into this direction.
(Especially after watching several interviews with Pavel Durov and listening to what he went through...)
What did he go through? Also, why would a blog website need ssh?
So which one is better? sntrup761x25519-sha512 or mlkem768x25519-sha256?
MLKEM768 offers better performance, while SNTRUP761 has a more conservative design intended to be resilient against potential cryptanalysis.
NTRU Prime (sntrup) is there mostly as a quirk of history (mlkem wasn't available when SSH went down the road of doing PQ). You can use either, but my guess is using sntrup is going to be a little like how GPG used to default to CAST as its cipher.
NTRU Prime was co-designed by Dan Bernstein, who also had a strong hand in the creation of ed25519 elliptic curve keys and the chacha20-poly1305 AEAD cipher.
https://news.ycombinator.com/item?id=32360533
While Kyber may have been the winning algorithm, there is still a strong preference in parts of the community for Bernstein's NTRU Prime.
> NTRU Prime (sntrup) is there mostly as a quirk of history (mlkem wasn't available when SSH went down the road of doing PQ).
ML-KEM (originally "CRYSTALS-Kyber") was available, it's just the Tiny/OpenSSH folks decided not to choose that particular algorithm (for reasons beyond my pay grade).
NIST announced their competition in 2016 with the submission deadline being in 2017:
* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
TinySSH added SNTRUP in 2018, with OpenSSH following in 2019/2020:
* https://blog.josefsson.org/2023/05/12/streamlined-ntru-prime...
SSH just happened to pick one of the candidates that NIST decided not to go with.
I am still asking myself when we'll get PQ algorithms for host keys and user authentication.
This is discussed on the page.
I’m happy to see they’re thinking ahead. There’s no value in disparaging efforts like this, as long as the alternatives that provide better security in the future don’t make things worse.
If you need to access a server across a network you don't 100% control, you have to assume your traffic is captured, and a future quantum computer would mean it can be decrypted unless the key exchange is post-quantum. Whether that's a concern or not is another matter.
This is an extremely important topic and one I'm glad is being brought up. I come from the physical ID and anti-counterfeiting space (think passports, banknotes, etc.). There is A LOT of buzz around this and how it relates to one's digital footprint and identity. We need to think differently about how to approach encryption... math-based cryptography is becoming very vulnerable.
We're building something that even the smartest ai or the fastest quantum computer can't bypass and we need some BADASS hackers...to help us finish it and to pressure test it.
Any takers?? Reach out: cryptiqapp.com (sorry for link but this is legit collaborative and not promotional)
>math-based cryptography is becoming very vulnerable
Can you explain this a bit more?