Comment by blowski

4 years ago

I was with you until the parsing with memory unsafe languages. Isn’t that exactly the kind of “random security not based on a threat model” type comment you so rightly criticised in the first half of your comment?

I think you must have misunderstood the point the parent comment was trying to make. Memory-safety issues are responsible for a majority of real-world vulnerabilities. They are probably the most prevalent extant threat in the entire software ecosystem.

  • Buffer overflows are common in CVEs because it's the kind of thing programmers are very familiar with. But I'm pretty sure that in terms of real-world exploits things like SQL injection, cross-site scripting, authentication logic bugs, etc, are still far more common. Almost all of those are in bespoke, proprietary software. A Facebook XSS exploit doesn't get a CVE.

  • It may sound like I’m being snarky, but I’m not:

Don’t users / social engineering make up the actual majority of real-world vulnerabilities, and pose the most prevalent extant threat in the entire software ecosystem?

    • Yes, but I think that within the context of discussing a memory safety vulnerability in a text messaging app it's reasonable to talk about memory safe parsers, no?

      Beyond that, I've already addressed phishing at our company, it just didn't seem worth pointing out.

    • A fair point, but that's not really a problem with the technology. (And I did hedge with "probably" :-)

Based on the hundreds, perhaps thousands of critical vulnerabilities that are due directly to parsing user input in memory-unsafe languages, usually resulting in remote code execution, how's this for a threat model: attacker can send crafted input that contains machine code that subsequently runs with the privileges of the process parsing the input. That's bad.

The attack surface is the parser. The ability to access it is arbitrary. I can't build a threat model beyond that for any specific case, but in the case of a text messaging app I absolutely expect "attacker can text you" to be in your threat model.

There are very few threat models that a memory unsafe parser does not break.

Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.

  • Then we’re right back in the checklist mentality of “500 things secure apps never do”. I could talk to somebody else and they’d tell me the real threat to worry about is phishing or poor CI/CD or insecure passwords or whatever.

• There is no single "real threat". Left unmitigated, phishing is definitely one of the top threats to an organization. Thankfully, we now have unphishable 2FA, so you can mitigate it. How you prioritize a threat is a call you have to make as the owner of your company's security posture - maybe phishing ranks above memory safety for you, I can't say.

      What I can say is that parsing untrusted data in C is very risky. I can't say it is more risky than phishing for you, or more risky than anything else. I lack the context to do so.

      That said, a really easy solution might be to just not do that. Just like... don't parse untrusted input in C. If that's hard for you, so be it, again I lack context. But that's my general advice - don't do it.

I mean, the threat model is that:

  1. Memory leaks/errors are bad.
  2. Programmers make those mistakes all the time.
  3. Using memory-safe languages is cheap.

  Therefore:

  4. We should use memory-safe languages more often.