Comment by Animats
5 years ago
Unfortunately, it's the same old story: a fairly trivial buffer overflow programming error in C++ code in the kernel parsing untrusted data, exposed to remote attackers.

> In fact, this entire exploit uses just a single memory corruption vulnerability to compromise the flagship iPhone 11 Pro device. With just this one issue I was able to defeat all the mitigations in order to remotely gain native code execution and kernel memory read and write.
Yes, the same old C/C++ buffer overflow problem. We have mainstream alternatives now: C#. Go. Rust. It's time to move on.
The code where the bug happens is legal C++, but it uses absolutely none of the memory safety improvements which were added to the language in the past... twenty years probably. It's basically C with classes.
If they haven't kept up with the changes in their current language, what makes one think that they would "move on" to the alternatives, two of which aren't even alternatives?
Before they switch to Rust, it would be much faster and more efficient to use smart pointers, std::array, and std::vector, and to stop using memcpy.
Note that this code is shipping as a kernel extension, which uses Embedded C++, not standard C++. Notably, things like templates and exceptions are not available. It would be nice if they could work on this instead, but looking at the dyld and Security sources (which have no such limitations, as they run in userspace) I don't have much confidence.
They could still make use of bounds checking, like my own classes did back in the MS-DOS days, when C++ARM was pretty much the only thing available.
Naturally, when one writes C in C++, it doesn't help.
2 replies →
As much as I like to bash security critical code written in memory-unsafe languages, I don't think that this is the crux of the problem here.
To me it's that this extremely trivial bug (the heap overflow, let's ignore the rest for now) passed through code review, security review, security audits, fuzzing... Or that Apple didn't have these in place at all. Not sure which option is worse.
We have 30 years of experience showing that ordinary heap overflows are not in fact easy to spot in code review, security review, security audits, and fuzzing. Each of those modalities eliminates a slice of the problem, and some of them --- manual review modalities --- will remove different slices every time they're applied; different test team, different bugs.
To me, this strongly suggests that the problem is in fact memory-unsafe languages, and not general engineering practices.
Apple, by the way, has all the things you're talking about in place, and in spades.
I agree that the problem is memory-unsafe languages.
You can improve the tools, or you can improve the human, and nobody has managed to improve the human despite decades of trying.
OTOH, we don't really have evidence to show that memory safety is effective in kernels/drivers because no memory safe language has ever been deployed at scale for that purpose.
The way I look at it is that relying exclusively on manual review is at best the same as relying on both manual review and a memory safe language.
In practice, the best case and average case rarely line up.
2 replies →
> the problem is in fact memory-unsafe languages, and not general engineering practices.
Languages don't introduce bugs by themselves. Engineers produced those bugs.
I always thought that bugs are the programmers' fault, and that the language is not to blame. It's like blaming the English language because it allows you to misuse it and manufacture very offensive racial slurs, or to be rude and cruel, and concluding that we should replace it with another language that doesn't allow these weaknesses to be exploited. We won't be able to express ourselves with beautifully crafted (low-level) poems anymore, but that's the price to pay.
1 reply →
Such bugs are extremely difficult to prevent at scale. Even the most talented engineers make such mistakes, and programming quality varies significantly even within top engineering teams, which are usually composed of people with different skill sets (plus junior engineers who need training).
Safe languages are the only way forward to drastically reduce the problem. It obviously can't be guaranteed to eliminate it 100%, because escape hatches are still available, but things will improve significantly because you can limit the surface area where such bugs can live.
Not sure if this is common for everyone, but I find that whenever I'm assigned to review a monster change, I spend over an hour just working out what the change does and whether it seems like it will work. There is no way I could spot tiny potential security exploits in 3,000 lines of changed code.
2 replies →
Does any software producer do fuzzing on their own product? I have never heard of this being done by software developers. Usually it's done by exploit developers. Of course there are static analysis tools that should uncover a problem like this, and I know that high-reliability embedded software developers use them, but I don't know if the likes of Apple does.
IMO this is huge.
Thankfully, Apple, like AWS, is starting to hire Rust developers.
The tide is turning; one day we will see Rust code in iOS/macOS, and these issues will be a thing of the past.
It seems unlikely that they’ll solve the problems in iOS with Rust.
It seems much more likely that they will use Swift in some form.
Swift seems an unlikely choice for incrementally replacing portions of kernel code.
9 replies →
By the time we all retire in a few decades they'll be a thing of the past, probably.
There's so much low-hanging fruit to pick in that code and switching to Rust is like saying that we should go to Mars to pick fruit instead.
C#? You've got to be joking.
It’s less insane than you might think:
https://en.m.wikipedia.org/wiki/Singularity_(operating_syste...
I agree rust is probably better suited. Or Apple could make their own memory safe language. They’re clearly capable.
That's an interesting experiment, but that's all it is. The project relies on ASM/C/C++ to boot into a microkernel and to interpret and run the C#. But I suppose it would greatly reduce the attack surface of C/C++/ASM code.
I just wonder, for example, how a capable hardware abstraction layer would work in C#, interrupt handling, CPU and IO scheduling, etc.
1 reply →
You mean Swift?
C#, Go, and Java all go in the same category (roughly)—they wouldn't work for kernel code. Rust will be a valid replacement for C++ kernel code in the near future, I'm sure.
Here's a POSIX kernel in Go, written explicitly to prove your point wrong:
https://github.com/mit-pdos/biscuit
https://pdos.csail.mit.edu/projects/biscuit.html
Where do you get that? Those have all been used in kernels, they work.
Also on another front Apple seems to have already enabled device drivers in user space: https://developer.apple.com/system-extensions/
2 replies →
Sure they would, so much so that there are people doing it right now:
https://www.wildernesslabs.co/
https://labs.f-secure.com/blog/tamago/
https://www.ptc.com/en/products/developer-tools/perc
Writing the majority of a kernel in those languages is certainly possible.
2 replies →
Nope, some people are actually quite serious about it.
https://www.wildernesslabs.co/
C# is GC'd, so there's a massive memory hit, and it's also not a language you can have in a kernel.
Go: GC again, so no go.
Rust: most sane of the examples you've given.
Apple has already started migrating to Swift which is a memory safe language.
However the real reasons Rust and Go aren't feasible is that they're both essentially all-or-nothing, and neither offers even the most basic semblance of ABI compatibility. Their only nod to ABI stability is "use FFI to C" which means your APIs remain unsafe, and doesn't work for non-C languages without all your system APIs having other languages layered on top.
Swift at least lets you replace individual objc classes one at a time, and is ABI stable, but has no C++ interaction.
Swift is far more like C# than Rust in terms of memory management. Sure, it uses ARC, but arguably that makes it unsuitable for kernel-level stuff.
XNU is refcounted; it's also C++, which isn't Swift-friendly.
XNU also has ABI stability requirements, which rule out Rust.
1 reply →
Yes, but what about these huge legacy codebases like the iOS kernel? I assume we will have to deal with this type of vulnerability for years to come...
Could also fire everyone on the C/C++ standards bodies and replace them with people willing to add arrays as a first-class data type.
I had that argument with the C standards people a decade ago. [1] Consensus was that it would work technically but not politically. The C++ people are too deep into templates to ever get out.
The basic trick for backwards compatibility is that all arrays have sizes, but you get to specify the expression which represents the size and associate it with the array. So you don't need array descriptors and can keep many existing representations.
Also, if you have slices, you rarely need pointer arithmetic. Slices are pointer arithmetic with sane semantics.
I'm tired of seeing decade after decade of C/C++ buffer overflows. It speaks badly of software engineering as a profession.
[1] http://www.animats.com/papers/languages/safearraysforc43.pdf
Or you could be like Jonathan Blow who claims he never has any memory bug issues and so it's not a problem worth solving in his JAI language.
The political aspect is why I suggest the solution is to just up and fire all those guys. More realistically, Microsoft, Apple, and Linus could just force the issue: it gets added to Visual C/C++, LLVM, and GNU C as an extension, and then they start polluting code bases and APIs with it.
6 replies →
I'm not sure what exactly you are trying to say. As far as I can tell, there are indeed safe variants for arrays in the standard, both static and dynamic. People just choose not to use them for arbitrary reasons.