
Comment by dvt

5 years ago

I read the entire thing, and honestly the heap grooming is very interesting, but really that's the boring part -- lots of trial and error, padding memory, etc. Also interesting that linked lists aren't used by Apple† (and Ian Beer's suggestion that they ought to use them), but that's neither here nor there. Getting kernel memory read/write is also very interesting, albeit (again) a bit tedious. At the end of the day, it all started with this:

> Using two MacOS laptops and enabling AirDrop on both of them I used a kernel debugger to edit the SyncTree TLV sent by one of the laptops, which caused the other one to kernel panic due to an out-of-bounds memmove.

How did this even pass the _smell_ test? How did it get through code reviews and auditing? You're allocating from an untrusted source. It's like memory management 101. I mean, my goodness, it's from a wireless source, at that.

† In this specific scenario, namely the list of `IO80211AWDLPeer`s.

> How did this even pass the _smell_ test?

Because attackers only have to find one place that was unlucky in implementation, and hence defenders are burdened with eliminating every last one of them.

This is why implementing your network protocols in unsafe languages is bad. Testing can only find some bugs; it cannot ensure the absence of bugs.

  • If it's not one thing, it's another.

    https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

    Now I know it's deeply comforting to think if you just had "safety" you could write all the code you want with abandon and the computer would tell you if you did it wrong, but this is a sophomoric attitude that you will either abandon when you have the right experiences, or you will go into management where the abject truth in this statement will be used to keep programmer salaries in the gutter, and piss-poor managers in a job. Meanwhile, these "safe" languages will give you nothing but shadows you'll mistake for your own limitations.

    My suggestion is just learn how to write secure code in C. It's an unknown-unknown for you at the moment, so you're going to have to learn how to tackle that sort of thing, but the good news is that (with the right strategy) many unknown-unknowns can be attacked using the same tricks. That means if you do learn how to write secure code in C, then the skills you develop will be transferable to other languages and other domains, and if you still like management, those skills will even be useful there.

    • You can’t just build a better developer when a single mistake is end game. Even if you do everything right you can still run into problems.

      The reason is that large projects can’t have only one developer, and as soon as you have multiple developers you have a problem. Suppose two developers begin working from the same base commit. Developer A makes a change to remove a contractual behavior that is not relied upon. Developer B makes a change that relies on this contractual behavior. Both changes are correct on their own, could very well pass code review simultaneously, and then both merge without conflicts. And then your last lifeline is whatever guarantees you have via static analysis, etc. (Notably, this could still fail in a memory-safe language if there aren’t any safeguards for this particular logic bug. Nothing is a panacea. Having more tools to write safer code, though, can at least help prevent some of these cases.)

      That’s assuming everyone is perfect and has unlimited time to write perfectly sound code always. And it still fails.

      You point to Rust, but nobody said it had to be Rust. Still, just because Rust is not a panacea does not mean it has no value. On the contrary, while there have been decades to hone practices for secure C, Rust is a relative newcomer and obviously shows a ton of promise. It and other new memory-safe languages are very likely to take a bite out of C usage where security is important. You can embrace this or deny it... but if you think it’s not happening, you should definitely take a look at the writing on the wall, because it’s certainly there. On the other hand, there are also other approaches. I believe seL4 is doing C code with proofs of correct operation. (Admittedly, I do not fully understand what guarantees this gives you and how, but it sounds promising based on descriptions. There could still be bugs in the proofs, but it certainly raises the bar.)


    • > My suggestion is just learn how to write secure code in C.

      That is a good suggestion to an individual developer. What is your suggestion to a lead developer of a big organisation? Let’s say to the CTO of Apple.

      You can see that at that level of abstraction, “make sure every one of your developers knows how to write secure code in C and they never slip up” manifestly doesn’t work.

      You can fault individuals for bugs up to a certain point, but if we want to make secure systems we have to change how we are making them, to make the whole process resistant to oopsies.


    • > My suggestion is just learn how to write secure code in C.

      This is not good advice. We've been battling with this issue for decades, and it's clearly not going away by trying to be more careful.

    • Another C advocate talking about the mythical safe C code that no one has managed to write in 50 years of CVE database entries.

      The whole point of a safe systems language is not to write code that is 100% free of exploits, but rather to minimize them as much as possible.

      Naturally there are still possible exploits; however, the attack surface is much smaller when memory corruption, UB (> 200 documented cases), implicit conversions and unchecked overflows aren't part of every translation unit.


    • > https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

      Almost all of those are due to code in unsafe blocks. In other words, not safe Rust.

      A few are cryptographic errors. No argument there, Rust won't save you from that.

      FWIW Rust does badly need a standardized unsafe-block auditing mechanism. Like "show me all the unsafe blocks in my code or any of the libraries it uses, except the standard library". If that list is too long to read, that's a bug in your project.


    • > My suggestion is just learn how to write secure code in C

      Decades of evidence demonstrate that this cannot be done. Even world experts introduce vulns. Writing secure code in languages with tons of guardrails is hard. Writing and evolving secure C is impossible at almost any scale.

    • That’s like saying “learn to drive a Formula One car if you want to feel safe driving at 65 miles an hour.” Sure, it works, but it’s impractical and unnecessary for everyone to do this.


I suspect linked lists are not used because they are notorious for wrecking cache performance.

  • Never heard of that, though I don't use C much. Are you referring to the CPU cache?

    • Yeah, linked lists are bad for the data cache: each element sits in some essentially random area of memory and is thus less likely to be in the cache, whereas a linear array's data is contiguous, so it caches effectively and accesses can be easily predicted.


Finding the bug that allowed this exploit took this researcher weeks. QA can't find all defects without testing every conceivable scenario, which would require knowing every conceivable scenario, and code review can only catch defects if at least one reviewer somehow knows that specific methods make an exploit possible. Given that the exact code of underlying methods may not be known to code reviewers, or that a reviewer might simply not know the full range of potential use cases for new code at the time of review, it is entirely understandable that defects and resulting exploits happen.

This is why researchers like the OP exist. They find exploits and report them to the manufacturer (hopefully) before they can be used. The fact that this is an effective way of protecting us is also why major software companies offer bug/exploit bounties to researchers.

To demand that no exploit of this nature ever finds its way into a production build is to demand perfection from humans. There is too much to know and think about, and definitely too many unknowns about the future, to make such a fantasy possible while still meeting release deadlines. We software developers often have a hard enough time just meeting feature and documentation deadlines, and adding more people makes organizing your efforts more complex and difficult, which then requires even more people, until the scope of organizing your development teams becomes financially impossible.

Apple uses tons of linked lists, I'm not sure why you got the impression that they don't?

  • I was referring specifically to the list of `IO80211AWDLPeer`s the author was reverse-engineering. His assumption was that the `IO80211AWDLPeer`s were in a linked-list type of data structure (which is a pretty sensible guess). In fact, it ended up being more akin to a priority queue:

    > The data structure holding the peers is in fact much more complex than a linked list, it's more like a priority queue with some interesting behaviours when the queue is modified and a distinct lack of safe unlinking and the like.

    I amended my post for clarification, I'm sure Apple uses linked lists all the time :)