
Comment by geocar

5 years ago

If it's not one thing, it's another.

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Now I know it's deeply comforting to think that if you just had "safety" you could write all the code you want with abandon and the computer would tell you when you did it wrong, but this is a sophomoric attitude that you will either abandon when you have the right experiences, or you will go into management, where the abject truth in this statement will be used to keep programmer salaries in the gutter and piss-poor managers in a job. Meanwhile, these "safe" languages will give you nothing but shadows you'll mistake for your own limitations.

My suggestion is just learn how to write secure code in C. It's an unknown-unknown for you at the moment, so you're going to have to learn how to tackle that sort of thing, but the good news is that (with the right strategy) many unknown-unknowns can be attacked using the same tricks. That means if you do learn how to write secure code in C, then the skills you develop will be transferable to other languages and other domains, and if you still like management, those skills will even be useful there.

You can’t just build a better developer when a single mistake is end game. Even if you do everything right you can still run into problems.

The reason is that large projects can’t have only one developer, and as soon as you have multiple developers you have a problem. What happens when two developers start from the same base commit? Developer A removes a contractual behavior that nothing relies upon. Developer B makes a change that relies on that contractual behavior. Both changes are correct on their own, could very well pass code review simultaneously, and then merge without conflicts. At that point your last lifeline is whatever guarantees you have via static analysis, etc. (Notably, this could still fail in a memory-safe language if there are no safeguards for this particular logic bug. Nothing is a panacea. Having more tools to write safer code, though, can at least help prevent some of these cases.)
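A contrived sketch of that merge hazard, in Rust for concreteness (all names here are hypothetical): on the base commit, `recent_ids` contractually returns its result sorted ascending. Developer A drops that guarantee; Developer B, working from the same base, adds `latest_id`, which silently depends on it. Both diffs merge cleanly, and the combination is wrong:

```rust
// Developer A's change: the result is no longer sorted,
// which the (now-removed) contract used to guarantee.
fn recent_ids() -> Vec<u64> {
    vec![3, 1, 2]
}

// Developer B's change: assumes the list is sorted ascending,
// so the last element is the latest id.
fn latest_id() -> Option<u64> {
    recent_ids().last().copied()
}

fn main() {
    // The "latest" id should be 3, but the merged code yields 2.
    // No compiler, in any language, flags this on its own.
    println!("latest_id = {:?}", latest_id());
    assert_eq!(latest_id(), Some(2));
}
```

The point being that no conflict marker and no type error appears anywhere; only a test or analysis that encodes the sortedness contract would catch it.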

That’s assuming everyone is perfect and has unlimited time to write perfectly sound code always. And it still fails.

You point to Rust, but nobody said it had to be Rust. Still, just because Rust is not a panacea does not mean it has no value. On the contrary: while there have been decades to hone practices for secure C, Rust is a relative newcomer and obviously shows a ton of promise. It and other new memory-safe languages are very likely to take a bite out of C usage where security is important. You can embrace this or deny it... but if you think it’s not happening, you should definitely take a look at the writing on the wall, because it’s certainly there. On the other hand, there are also other approaches. I believe seL4 is doing C code with proofs of correct operation. (Admittedly, I do not fully understand what guarantees this gives you and how, but it sounds promising based on the descriptions. There could still be bugs in the proofs, but it certainly raises the bar.)

  • > You can’t just build a better developer when a single mistake is end game.

    Why not?

    Doctors got better with more training and better strategies towards medicine, and that's a game where a single mistake really is "end game".

    • Doctors benefit from better tools and processes too, but this is all wildly beside the point, because my point was not that we can’t build a better programmer, it’s that we can’t just build a better programmer, for the reason that I then went on to outline.

      You bring up a great point, though. Historically, C is often not trusted for software where people’s lives are on the line, so bringing up doctors is a great example of how building a better programmer is not good enough. There’s an entire class of “safety critical” programming practices and standards, and it was common to prefer a language like Ada that turned more bugs and logic errors into compiler errors.


> My suggestion is just learn how to write secure code in C.

That is a good suggestion to an individual developer. What is your suggestion to a lead developer of a big organisation? Let’s say to the CTO of Apple.

You can see that at that level of abstraction, “make sure every one of your developers knows how to write secure code in C and never slips up” manifestly doesn’t work.

You can fault individuals for bugs up to a certain point, but if we want to make secure systems we have to change how we are making them. To make the whole process resistant to oopsies.

  • I don't think this is a case of "both sides have a point" when it comes to blaming individuals for vulnerabilities or suggesting they learn to write secure C. We're way past these things.

  • > What is your suggestion to a lead developer of a big organisation? Let’s say to the CTO of Apple.

    He can pay for my advice if he really wants to hear it, but this isn't really about programming at this point because we're talking about business goals which can have all sorts of priorities besides making quality software.

    Do we want software quality? Then we want better programmers.

    > if we want to make secure systems we have to change how we are making them.

    On this we agree, but thicker training wheels just gets more people on bikes; It doesn't make the roads any safer.

    • Better programmers in C can't effectively eliminate whole classes of the most common fatal security vulnerabilities.

      Of course, after we do eliminate this low-hanging fruit, we will be left with a pie of the remaining vulnerability classes that looks different; it's like Amdahl's law. But that's no excuse to skip past "step 1".


  • I agree with your main point, just a nitpick about "at that level of abstraction the “make sure every one of your developers know how to write secure code in C and they never slip up” manifestly doesnt work": it kinda worked with Windows post-XP SP2; the number of security holes fell pretty dramatically in subsequent releases.

    When a company puts security first, it can get results. Unfortunately, security doesn't sell software the way features do, so a true hardened-by-default mindset is impossible in practice. Hence we need better tools and processes to build features, as you say.

  • Hire more security auditors?

    Apple employees would have access to these symbols and much more debugging info than the OP, who "got lucky due to an accidental share", so the process would be a lot easier for them.

    EDIT: also, required reading on how vulnerabilities are discovered and on the important types of bugs is not unreasonable, nor a long read.

  • > every one of your developers

    You don't need "every one of your developers" to write C, either.

> My suggestion is just learn how to write secure code in C.

This is not good advice. We've been battling this issue for decades, and it's clearly not going away by trying to be more careful.

Another C advocate talking about the mythical safe C code that no one has managed to write in 50 years of CVE database entries.

The whole point of a safe systems language is not to write code 100% free of exploits, but rather to minimize them as much as possible.

Naturally there are still possible exploits, but the attack surface is much smaller when memory corruption, UB (200+ documented cases), implicit conversions and unchecked overflows aren't part of every translation unit.
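To make the overflow point concrete, a minimal Rust sketch: the silent wraparound that C integer arithmetic performs by default has to be requested explicitly by name, and the checked form surfaces the overflow as a value instead of hiding it.

```rust
fn main() {
    let len: u8 = 250;

    // C-style silent wraparound must be asked for explicitly.
    let wrapped = len.wrapping_add(10);
    assert_eq!(wrapped, 4); // 260 mod 256

    // The checked form reports the overflow instead of wrapping.
    let checked = len.checked_add(10);
    assert_eq!(checked, None);

    // A plain `len + 10` would panic in debug builds rather than
    // silently wrap, so the bug surfaces in testing.
    println!("wrapped = {wrapped}, checked = {checked:?}");
}
```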

  • I'm just so happy seeing the GP comment downvoted. It gives me hope that that mentality is slowly dying. Maybe in 30 years we'll rid ourselves of the C/C++ shackles for something like Rust.

> https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Almost all of those are due to code in unsafe blocks. In other words, not safe rust.

A few are cryptographic errors. No argument there, Rust won't save you from that.

FWIW Rust does badly need a standardized unsafe-block auditing mechanism. Like "show me all the unsafe blocks in my code or any of the libraries it uses, except the standard library". If that list is too long to read, that's a bug in your project.
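For what "unsafe block" means here, a minimal sketch: skipping the bounds check on slice indexing is only allowed inside `unsafe`, which is exactly the marker such an auditing tool would surface. (A crate can also ban unsafe wholesale with the `#![forbid(unsafe_code)]` attribute.)

```rust
// Uncommenting this crate-level attribute would turn the unsafe
// block below into a hard compile error:
// #![forbid(unsafe_code)]

fn main() {
    let v = vec![10, 20, 30];

    // Safe indexing: bounds-checked, panics if out of range.
    let a = v[1];

    // The same read without the bounds check requires `unsafe`,
    // and is what an unsafe-audit tool would flag for review.
    let b = unsafe { *v.get_unchecked(1) };

    assert_eq!(a, b);
    println!("a = {a}, b = {b}");
}
```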

  • Related to what you are looking for is https://github.com/rust-secure-code/cargo-geiger which analyzes the dependency tree for unsafe code, but AFAIK it doesn't actually show each individual block.

    The readme is quite good.

    • Wow, yeah, that's exactly the technological aspect of what I had in mind.

      I guess all that's left is the sociological aspect: packages' "geiger" status ought to be treated as being as important as their dependencies. In other words, lib.rs/docs.rs/crates.io ought to display these data in all the places where they list a package's dependencies.

      It would also be great if this tool were made a standard part of cargo. I think it's important enough to deserve that status.


> My suggestion is just learn how to write secure code in C

Decades of evidence demonstrate that this cannot be done. Even world experts introduce vulns. Writing secure code in languages with tons of guardrails is hard; writing and evolving secure C is impossible at almost any scale.

That’s like saying “learn to drive a Formula One car if you want to feel safe driving at 65 miles an hour.” Sure, it works, but it’s impractical and unnecessary for everyone to do this.

  • Also, writing secure C is much harder than driving a Formula One car (as evidenced by the number of competent practitioners of both disciplines in existence).