Comment by coppsilgold

9 hours ago

No closed-source E2EE client can be truly secure, because the endpoints of the end-to-end encryption are opaque.

Detecting backdoors is only truly feasible with open source software, and even then it can be difficult.

A backdoor can be a subtle remote code execution "vulnerability" that can only be exploited by the server. If it is used carefully and exfiltrates data within expected client-server communications, it can be all but impossible to detect. This approach also makes it more likely that almost no insider will even be aware of it: it could be a small patch applied during the build process, or to the binary itself (for example, a single bounds-check branch). This is also another reason why reproducible builds are a good idea for open source software.
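
For illustration, a minimal sketch of what such a bugdoor might look like, assuming a hypothetical frame parser (the format, names, and the 0x7F type value are all invented): one frame type skips the bounds check, so only a server that crafts that frame can overflow the buffer, while all normal traffic behaves correctly.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_BODY 256

/* Hypothetical wire frame -- the format and names are invented purely
 * for illustration: [type][len][body], with len taken from the wire. */
struct frame {
    uint8_t        type;
    uint16_t       len;    /* controlled by whoever runs the server */
    const uint8_t *body;
};

/* Looks like an ordinary parser. The "backdoor" is one missing
 * comparison: frames of type 0x7F skip the bounds check, so only a
 * server that sends that rare type with len > MAX_BODY can overflow
 * the stack buffer. In a build-time patch or a binary patch this is
 * a single omitted (or flipped) branch. */
static int handle_frame(const struct frame *f)
{
    uint8_t body[MAX_BODY];

    if (f->type != 0x7F) {          /* normal path: length is checked   */
        if (f->len > MAX_BODY)
            return -1;
    }                               /* 0x7F path: length is NOT checked */

    memcpy(body, f->body, f->len);  /* overflows when len > MAX_BODY    */
    printf("handled %u-byte frame of type 0x%02x\n",
           (unsigned)f->len, (unsigned)f->type);
    return 0;
}

int main(void)
{
    uint8_t benign[16] = {0};
    struct frame f = { .type = 0x01, .len = sizeof(benign), .body = benign };

    /* All ordinary traffic works exactly as expected. */
    return handle_frame(&f);
}
```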

>Detecting backdoors is only truly feasible with open source software, and even then it can be difficult.

This is absurd. Detecting backdoors is only truly feasible on binaries; there's no way you can understand compiler behavior well enough to spot hidden backdoors in source code.

With all due respect to Stallman, you can actually study binaries.

The claim Stallman would make (after admonishing you for an hour for saying "Open Source" instead of "Free Software") is that Closed Software (Proprietary Software) is unjust. But in the context of security, the claim would be limited to Free Software being capable of being secure too.

You may be able to argue that Open Source reduces risk in threat models where the manufacturer is the attacker, but in any other threat model, security is an advantage of closed source. It's automatic obfuscation.

There are a lot of advantages to Free Software; you don't need to make up new ones.

  • This. Closed source doesn't stop people from finding exploits, in the same way that open source doesn't magically make people find them. The Windows kernel is proprietary and closed source, but people constantly find exploits in it anyway. What matters is that there is a large audience that cares about auditing.

    OTOH, if Microsoft really wanted to sneak in a super hard to detect spyware exploit, they probably could - but so could the Linux kernel devs. Some exploits have been openly sitting in the Linux kernel for more than a decade despite everyone being able to audit it in theory. Who's to say they weren't planted by some three letter agency that coerced a developer?

    Relying on either approach alone is pointless anyway. IT security is not a single means to all ends. It's a constant struggle between safety and usability at every single level, from raw silicon all the way to user-land.

  • It's weird to me that it's 2026 and this is still a controversial argument. Deep, tricky memory corruption exploit development is done on closed-source targets, routinely, and the kind of backdoor/bugdoor people conjure in threads about E2EE is much simpler than those bugs.

    It was a pretty much settled argument 10 years ago, even before the era of LLVM lifters, but post-LLM the standard-of-care practice is often full recompilation and execution.

  • > in any other threat model, security is an advantage of closed source

    I think there's a lot of historical evidence that doesn't support this position. For instance, Internet Explorer was generally agreed by all to be a much weaker product from a security perspective than its open source competitors (Gecko, WebKit, etc.).

    Nobody was defending IE from a security perspective because it was closed source.

  • I was with you until you somehow claimed obfuscation can improve security, against all historical evidence, even from before computers.

    • Obscurity is a delay tactic which raises the time cost associated with an attack. It is true that obscurity is not a security feature, but it is also true that increasing the time cost of attacking you is a form of deterrent against attempts. If you are not at the same time also secure in the conventional sense, then it is only buying you time until someone puts in the effort to figure out what you are doing and owns you. And you better have a plan for when that time comes. But everyone needs time, because bugs happen, and you need that time to fix them before they are exploited.

      The difference between obscurity and a secret (password, key, etc.) is the difference between less than a year to figure it out and a year or more to figure it out.

      There is a surprising amount of software out there with obscurity preventing some kind of "abuse" and in my experience these features are not that strong, but it takes someone like me hours to reverse engineer these things, and in many cases I am the first person to do that after years of nobody else bothering.

    • This is a tired trope. Depending exclusively on obfuscation (security by obscurity) is not safe. Maintaining confidentiality of things that could aid in attacks is absolutely a defensive layer and improves your overall security stance.

      I love the Rob Joyce quote that explained why TAO was so successful: "In many cases we know networks better than the people who designed and run them."

    • I think you are conflating:

      Is an unbreakable security mechanism

      with

      Improves security

      Anything that complicates the attacker's job improves security, at least to a first approximation. That said, there might be counter-effects that make it a net loss or net neutral.

  • Explain how you detect a branched/flagged sendKey (or whatever it would be called) call in the compiled WhatsApp iOS app?

    It could be interleaved in any of the many analytics tools in there too.

    You have to trust the client in E2E encryption. There's literally no way around that. You need to trust the client's OS (and in some cases, other processes) too. A sketch of the kind of gated call I have in mind follows.
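
    A minimal, hypothetical sketch of that kind of path (none of these names are real WhatsApp code; the struct, the flag, and the analytics call are all invented): the session key only leaves the device when a server-delivered flag is set, riding inside an otherwise ordinary telemetry upload.

    ```c
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical client state -- all names invented for illustration. */
    struct session {
        uint8_t  key[32];        /* E2EE session key              */
        uint32_t feature_flags;  /* flags delivered by the server */
    };

    #define FLAG_DIAG_EXTENDED 0x00800000u   /* innocuous-sounding flag */

    /* Stand-in for a perfectly ordinary analytics upload. */
    static void analytics_upload(const uint8_t *buf, size_t n)
    {
        (void)buf;
        (void)n;
    }

    /* Ordinary-looking telemetry routine. Only when the server sets
     * FLAG_DIAG_EXTENDED does the session key ride along inside the
     * "diagnostics" blob -- one branch buried among real analytics. */
    static void send_diagnostics(const struct session *s,
                                 const uint8_t *metrics, size_t metrics_len)
    {
        uint8_t blob[256];
        size_t n = metrics_len < 224 ? metrics_len : 224;

        memcpy(blob, metrics, n);

        if (s->feature_flags & FLAG_DIAG_EXTENDED) {   /* the gated call */
            memcpy(blob + n, s->key, sizeof(s->key));
            n += sizeof(s->key);
        }

        analytics_upload(blob, n);   /* same endpoint and code path either way */
    }

    int main(void)
    {
        struct session s = { .key = {0}, .feature_flags = 0 };
        uint8_t metrics[8] = {0};

        /* Benign by default; the key only leaves if the server flips the flag. */
        send_diagnostics(&s, metrics, sizeof(metrics));
        return 0;
    }
    ```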

    • >Explain how you detect a branched/flagged sendKey (or whatever it would be called) call in the compiled WhatsApp iOS app?

      Vastly easier than spotting a clever bugdoor in the source code of said app.

  • This comment comes across as unnecessarily aggressive and out of nowhere (Stallman?); it's really hard to parse.

    Does this rewording reflect its meaning?

    "You don't actually need code to evaluate security, you can analyze a binary just as well."

    Because that doesn't sound correct?

    But that's just my first pass, at a high level. Don't wanna overinterpret until I'm on surer ground about what the dispute is. (i.e. don't want to mind read :) )

    Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and it's secure", which sounds facile. (presumably something is being written somewhere for the whole chat thing to work; can't do P2P because someone's app might not be open when you send)

    • https://www.gnu.org/philosophy/free-sw.html

      Whether the original commenter knows it or not, Stallman greatly influenced the very definition of Source Code, and the claim being made here is very close to Stallman's freedom to study.

      >"You don't actually need code to evaluate security, you can analyze a binary"

      Correct

      >"just as well"

      No, of course analyzing source code is easier and analyzing binaries is harder. But it's still possible (feasible is the word used by the original comment).

      >Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and it's secure",

      I didn't say anything about that? I mean, those are valid tactics as part of a wider toolset, but I specifically said binaries, because the binary maps one to one with the source code. If you can find something in the source code, you can find it in the binary and vice versa (a toy illustration of this correspondence is at the end of this comment). Analyzing file accesses and network traffic, or runtime analysis of any kind, is going to be mostly orthogonal to source code/binary static analysis; the only difference is whether you have a debug map to the source code or to the machine code.

      This is a very central conflict in Free Software. What I want to make clear is that Free Software advocates refuse to study closed source software not because it is impossible, but because it is unjustly hard. Free Software never claims it is impossible to study closed source software; it claims that source code access is a right, and its advocates prefer to reject closed source software outright, and thus never need to perform binary analysis.
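
      As a toy illustration of that correspondence (the function, the constant, and the disassembly shown are all hypothetical, and the exact instructions vary by compiler, flags, and target): a source-level bounds check survives as a compare-and-branch in the binary, so the same logic can be audited in either form.

      ```c
      #define MAX_MSG 4096

      /* A source-level check and (roughly) what it survives as in the
       * compiled output. */
      int validate_len(unsigned long len)
      {
          if (len > MAX_MSG)   /* typically compiles to something like:       */
              return -1;       /*     cmp   rdi, 0x1000                       */
                               /*     ja    .reject                           */
          return 0;            /* removing the check removes that branch from */
      }                        /* the binary as visibly as from the diff.     */

      int main(void)
      {
          return validate_len(16);
      }
      ```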

  • What’s the state of the art of reverse engineering source code from binaries in the age of agentic coding? Seems like something agents should be pretty good at, but I haven’t read anything about it.

    • I think there’s a good possibility that the technology that is LLMs could be usefully trained to decode binaries as a sort of squint-and-you-can-see-it translation problem, but I can’t imagine, e.g., pre-trained GPT being particularly good at it.

    • I've been working on this; the results are pretty great when using the fancier models. I have successfully had gpt5.2 complete fairly complex matching decompilation projects, but also projects with more flexible requirements.

    • Nothing yet; agents analyze code, which is textual.

      The way they analyze binaries now is through the textual interfaces of command-line tools, and the tools used are mostly the ones the foundation models supported at training time; for the most part you can't teach a model new tools at inference, they must be supported at training. So most providers focus on the same tools and benchmark against them, and binary analysis is not in the zeitgeist right now; it's about production more than understanding.

    • Agents are sort of irrelevant to this discussion, no?

      Like, it's assuredly harder for an agent than having access to the code, if only because there's a theoretical opportunity to misunderstand the decompile.

      Alternatively, it's assuredly easier for an agent because, as execution time approaches infinity, they can try all possible interpretations.
