Comment by austin-cheney
4 days ago
That is wrong. Most digital signatures in use today rely on certificates trusted through a certificate trust chain. The algorithms are different.
Internally, most signature algorithms use hash functions. RSA-PSS, EdDSA and ML-DSA use them to provide something like randomness, and the security analysis of those signature schemes includes arguments assuming (in some very particular, technical ways) that the hash function outputs "look random".
Classical DSA and ECDSA do not use hash functions this way, but in my opinion they aren't stronger for it: they're basically assuming instead that some other mathematical function "looks random", which seems riskier than assuming that about a hash function. I've heard that the reason for this is to get around Schnorr's patent on doing it with hash functions, which has since expired.
The SHA3 and SHAKE hash functions (underlying e.g. ML-DSA) are explicitly designed to "look random" as well.
There are some signature schemes that try not to make such strong assumptions: in particular SLH-DSA targets properties more like first- and second-preimage resistance, target-collision-resistance, and so on.
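To make the "looks random" point concrete, here is a toy Schnorr signature (the direct ancestor of EdDSA) in Python. The group parameters are deliberately tiny and insecure, chosen only for illustration. The thing to notice is that the hash function stands in for a verifier's random challenge; this is the Fiat-Shamir transform, the patented trick the comment above refers to.

```python
import hashlib
import secrets

# Toy Schnorr group: q divides p - 1 and g generates the order-q subgroup.
# These sizes are hopelessly insecure; real deployments use ~256-bit groups.
p = 48731   # toy prime modulus
q = 443     # toy prime subgroup order; p - 1 = 48730 = 2 * 5 * 11 * 443
g = next(h for h in (pow(b, (p - 1) // q, p) for b in range(2, p)) if h != 1)

def H(r: int, msg: bytes) -> int:
    """Hash to a challenge in Z_q; the security argument assumes this 'looks random'."""
    digest = hashlib.sha256(r.to_bytes(8, "big") + msg).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key in [1, q-1]
    return x, pow(g, x, p)             # (private, public)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1   # per-signature nonce
    r = pow(g, k, p)                   # commitment
    e = H(r, msg)                      # hash replaces an interactive random challenge
    s = (k + x * e) % q
    return e, s

def verify(y: int, msg: bytes, e: int, s: int) -> bool:
    # g^s * y^(-e) = g^(k + x*e) * g^(-x*e) = g^k recovers the commitment.
    r = (pow(g, s, p) * pow(y, -e, p)) % p
    return H(r, msg) == e

x, y = keygen()
e, s = sign(x, b"hello")
assert verify(y, b"hello", e, s)
# Forgery fails (except with chance ~1/q at these toy sizes):
assert not verify(y, b"tampered", e, s)
```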
All the algorithms you mention are PKI. RSA uses two large prime numbers. I don't see what hash sequences have to do with this at all.
PKI isn't even really about randomness. RSA does use a kind of randomness to generate its large primes, but that is beneficial rather than required. The primary consideration is math that computers currently find cheap to compute in one direction but extremely expensive to reverse: guessing a factor of a product of two primes, or a square root of a large number, or something similar.
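That "cheap one way, expensive to reverse" asymmetry is easy to see with textbook RSA at toy sizes. A minimal sketch, using trivially small hypothetical primes (real keys are 2048+ bits):

```python
# Toy textbook RSA. Forward operations are cheap; recovering the private
# exponent requires the factorization of n, which is what is believed to
# be infeasible at real key sizes.
p, q = 61, 53                 # secret primes (trivially small here)
n = p * q                     # public modulus: 3233
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # computable only if you know p and q
d = pow(e, -1, phi)           # private exponent: 2753

m = 42
c = pow(m, e, n)              # public-key direction: cheap
assert pow(c, d, n) == m      # private-key direction: cheap *with* d

# An attacker who can factor n recovers d the same way:
# factor(3233) -> (61, 53) -> phi -> d. No known classical algorithm
# does this in feasible time at 2048 bits.
```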
The intro textbook descriptions of cryptographic systems omit a lot of very important details.
When using RSA to sign a message m, in practice you don't send m^d mod N. That would generally be insecure, depending on what kinds of messages your system sends and/or accepts. In practical systems, instead you hash m, and then adjust the hash through a (possibly randomized) process called "padding" to be a value in [0,N). There are different standards for padding, and the better designs use additional hashing.
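Here is roughly what hash-then-pad-then-sign looks like in practice, sketched with the pyca/cryptography package (assuming it's available). PSS is one of the randomized padding designs mentioned above, and the MGF1 inside it is the "additional hashing":

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"message m"
signature = private_key.sign(
    message,
    padding.PSS(                      # randomized padding: salt + MGF1 hashing
        mgf=padding.MGF1(hashes.SHA256()),
        salt_length=padding.PSS.MAX_LENGTH,
    ),
    hashes.SHA256(),                  # the hash applied to m before padding
)

# Verification recomputes the digest and checks the padded structure;
# it raises InvalidSignature on failure.
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```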
The security of the system depends in part on the hashed-then-padded message "looking random", i.e. not having structure that can be exploited by an attacker. It turns out to be tricky to formalize what exact randomness property you need, so cryptosystems are often analyzed in the "random oracle model" (ROM) in which the hash function has impossibly strong randomness properties.
It seems that usually, if you use a strong hash function, a scheme that's proved secure in the ROM is secure in real life (or at least it's not the ROM part that breaks); the counterexamples are usually really contrived. This article is about a somewhat-less-contrived, but still not quite realistic, example where something that's secure in the ROM would break due to the ROM being an unrealistic model.
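For intuition, the random oracle is often modeled as a lazily sampled lookup table: every fresh query gets a truly random answer, and repeated queries get consistent ones. A minimal sketch of that idealization:

```python
import secrets

class RandomOracle:
    """The ROM idealization: perfectly random, but consistent, answers.
    Real hashes like SHA-256 only approximate this object."""

    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.table: dict[bytes, bytes] = {}   # lazily sampled answers

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = secrets.token_bytes(self.out_len)
        return self.table[x]

H = RandomOracle()
assert H.query(b"m") == H.query(b"m")   # consistent on repeat queries
assert H.query(b"m") != H.query(b"m'")  # distinct (except with negligible probability)
```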
It's not wrong. The only thing preventing me from forging your certificate is my inability to generate a new cert that hashes to the same digest as the one your cert's signature was computed over. I don't actually need the keys if I break the hash.
EDIT2: I'm doing a bad job of explaining this... you obviously need the keypair associated with the cert to initiate connections with it and not trigger MITM alerts. But if you break the hash function, you don't need the private key from the root cert: the verbatim signature from the original cert will appear to be valid when spliced into your forged cert, as long as the hash digest computed over the forged cert matches the original.
And checking the validity of a certificate involves checking a signature over... the certificate's hash. If you can break the underlying hash function, the trust chain is broken.
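A sketch of that flow, again using the pyca/cryptography package (assumed available), with raw bytes standing in for a real TBSCertificate. The signature protects only the digest, so a hash collision transfers it verbatim to a forged body:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def ca_sign(cert_body: bytes) -> bytes:
    # X.509 signatures are hash-then-sign: only the digest is protected.
    return ca_key.sign(cert_body, padding.PKCS1v15(), hashes.SHA256())

def check_cert(cert_body: bytes, signature: bytes) -> bool:
    # The verifier never sees the body the CA originally signed; it only
    # recomputes the digest of whatever body it was handed.
    try:
        ca_key.public_key().verify(signature, cert_body,
                                   padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

original = b"CN=example.com, key=..., expires=..."
sig = ca_sign(original)

# If an attacker could produce forged != original with
# SHA256(forged) == SHA256(original), then check_cert(forged, sig)
# would return True with no access to ca_key at all. This is how the
# 2008 rogue-CA certificate was built from MD5 collisions.
forged = b"CN=attacker.com, key=..., CA=true"
assert check_cert(original, sig)
assert not check_cert(forged, sig)   # holds only because SHA-256 resists collisions
```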