Comment by Aurornis

5 months ago

> Much of the HN codebase consists of anti-abuse measures that would stop working if people knew about them.

We’ve all heard about how “security through obscurity” isn’t real security, but so many simple anti-abuse measures are very effective as long as their exact mechanism isn’t revealed.

HN’s downvote and flagging mechanisms make for quick cleanup of anything that gets through, without putting undue fatigue on the users.

Things called "security" that don't follow Kerckhoffs's principle aren't security. There are a lot of things adjacent to security, like spam prevention, that sometimes get dumped into the same bucket, but they're not really the same.

Security measures uphold invariants: absent cryptosystem breaks and implementation bugs, nobody forges a TLS certificate. I need the private key to credibly present my certificate to the public. Hard guarantee, assuming my assumptions hold.

Likewise, if my OS is designed so sandboxed apps can't steal my browser cookies, that's a hard guarantee, modulo bugs. There's an invariant one can specify formally --- and it holds even if the OS source code leaks.

Abuse prevention? DDoS avoidance? Content moderation? EDR? Fuzzy. Best effort. Difficult to verify. That these things are sometimes called security products doesn't erase the distinction between them and systems that make firm guarantees about upholding formal invariants.

HN abuse prevention belongs to the security-adjacent but not real security category. HN's password hashing scheme would fall under the other category.
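That second category can be made concrete. HN's actual hashing scheme isn't public, but the invariant-style property looks like a standard salted, slow KDF (the algorithm and parameters below are illustrative, not HN's):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random salt defeats precomputed rainbow tables; the iteration
    # count makes brute-forcing each guess expensive. 600_000 is an
    # illustrative figure, not a claim about any real deployment.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The invariant here is publishable: even with the full source code and the stored salt and digest, recovering the password requires brute force at the KDF's cost per guess. Nothing about the scheme depends on staying secret.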

  • This is simply not true. At the highest levels, security is about distributing costs between attackers and defenders, with defenders having the goal of raising costs past a threshold where attacks are no longer reasonable expenses for any plausible attacker. Obfuscation, done well, can certainly play a role in that. The Blu-ray BD+ scheme is a great case study on this.

    • A definition can't be right or wrong. We're using different definitions of the word "security". What would you call the rigorous invariant-based conceptualization I call "security"?

> We’ve all heard about how “security through obscurity” isn’t real security

This is something that programmers enjoy repeating but it has never been true in the real world.

  • You can only say that if you have no idea about cryptography. It is definitely true in the real world, but it needs the right context to be relevant.

    It is related to Kerckhoffs's principle: "The design of a system should not require secrecy, and compromise of the system should not inconvenience the correspondents."

    This means that all of the security must reside in the key and little or nothing in the method, since the method can be discovered, and a system whose security depends on its method is rendered ineffective once it is. Keep in mind that this applies to communication systems where it is certain that messages will be intercepted by a hostile agent, and we want to prevent that agent from reading them.
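A minimal sketch of that principle, using HMAC-SHA256 from the Python standard library: the algorithm is completely public, yet an attacker who knows everything except the key still cannot forge a valid authentication tag.

```python
import hashlib
import hmac
import os

# The method (HMAC-SHA256) is published worldwide; only the key is secret.
key = os.urandom(32)
message = b"meet at dawn"
tag = hmac.new(key, message, hashlib.sha256).digest()

# An attacker knows the algorithm but must guess the key. A random guess
# fails (collision probability is a negligible 2**-256 here).
attacker_key = os.urandom(32)
forged = hmac.new(attacker_key, message, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)

# The legitimate recipient, holding the key, verifies successfully.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```

All of the security "resides in the key": publishing this source code, as Kerckhoffs's principle demands, costs the scheme nothing.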

    When implementing modern cryptographic systems, it is very easy to misuse the libraries, or to try to reimplement cryptographic ideas without a deep understanding of the implications, and this leads to systems that are more vulnerable than intended.

    Security by obscurity is the practice, among some developers, of reinventing cryptography by applying their cleverness to new, unknown cryptosystems. Doing this correctly, however, requires deep mathematical knowledge of finite fields, probability, linguistics, and so on. Most people have not spent the required decades learning this. The end result is that those "clever" systems with novel algorithms are much less secure than tried-and-true cryptosystems like AES and TLS. That's why we say "security by obscurity" is bad.

    Now, going back to the main topic: Hacker News is not a cryptographic system where encoded messages are going to be intercepted by a hostile actor. Therefore Kerckhoffs's principle doesn't apply.