
Comment by qubex

7 years ago

There's far more risk in software not crashing when it gets malformed or otherwise unexpected input. If an application crashes, its memory space has been relinquished and its execution process aborted. Yes, something could've been spawned before the crash, but in general, crashing when something unexpected comes up is the more sensible, desirable behaviour.

(Or am I wrong? I'm not a professional programmer. I'm just reasoning from common sense.)

The bug causing this crash might be exploitable. Think of a classic buffer overflow: if you overflow a buffer with all zeroes or random data, the return address most likely gets overwritten with garbage that doesn't point to valid code or a mapped address, and the process crashes. But if the attacker carefully chooses the data they put in the buffer, they can overwrite the return address with a valid memory address and make the process execute the attacker's own code.
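To make that concrete, here's a minimal C sketch of the scenario being described; parse_record and the 64-byte buffer are made up for illustration:

    #include <string.h>

    /* Hypothetical vulnerable parser: copies attacker-controlled input
     * into a fixed-size stack buffer with no length check. */
    void parse_record(const char *input) {
        char buf[64];       /* the saved return address sits just past this frame */
        strcpy(buf, input); /* writes past buf whenever input exceeds 63 bytes */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            parse_record(argv[1]); /* random garbage here usually just segfaults;
                                      a crafted input can redirect the return */
        return 0;
    }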

If software written in C/C++ crashes, and it's not because of a null pointer dereference specifically, then it's realistic to worry about whether the cause might be an exploitable bug (like a buffer overflow, a double-free, etc.). One common way to find exploitable bugs is fuzzing: scripting a program to re-run with random input data to discover which inputs crash it, then debugging the crashes to see whether they're caused by exploitable bugs.
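A bare-bones version of that loop might look like this in C; ./target and the 256-byte test-case size are placeholders:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define INPUT_SIZE 256  /* arbitrary test-case size */

    int main(void) {
        srand((unsigned)time(NULL));
        for (int i = 0; ; i++) {
            /* Write a random test case to disk. */
            unsigned char buf[INPUT_SIZE];
            for (int j = 0; j < INPUT_SIZE; j++)
                buf[j] = (unsigned char)(rand() & 0xff);
            FILE *f = fopen("testcase.bin", "wb");
            if (!f) return 1;
            fwrite(buf, 1, sizeof buf, f);
            fclose(f);

            /* Run the target on it and see how it dies. */
            pid_t pid = fork();
            if (pid == 0) {
                execl("./target", "target", "testcase.bin", (char *)NULL);
                _exit(127);  /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status)) {
                /* Crash: keep the input around for later debugging. */
                char name[64];
                snprintf(name, sizeof name, "crash-%d.bin", i);
                rename("testcase.bin", name);
                printf("crash %d: signal %d\n", i, WTERMSIG(status));
            }
        }
    }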

  • Yes, I wrote a fuzzer once and was one of the people who independently discovered the ancient NT 4.0 SP6 "named pipe" vulnerability. I just tend to think that crashing on unexpected stuff is more sensible than any alternative (a kind of deny-by-default).

    • Yes, it is, but I think you'll agree that without knowing what exactly counts as unexpected, it's hard to tell whether the program is really crashing on all unexpected input, or crashing on most of it and running the attacker's code on the rest.

      That’s what should make people worried a bit.

      As to fuzzing: given the complexity of the code and the frequency at which bugs are found, I would expect Apple to fuzz their font rendering code 24/7. Do bugs still surface because there are that many, because the whole rendering engine changes that often, because of compiler bugs that do not show up in instrumented code, or because they don’t fuzz it themselves that well?

    • That depends on what mechanism actually causes the crash. If the crash on unexpected input is intentional, then all is good. If it's the result of some random corruption of something, then you have a problem.

      Edit: spelling and grammar

  • The text segment containing the machine instructions is mapped read-only. You won't be able to overflow a heap variable with the intention of writing into the text segment without causing a segfault.
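    A quick way to see this directly (a sketch; on typical systems code pages are mapped read+execute, so the read succeeds and the write is killed with SIGSEGV):

        #include <stdio.h>

        void target(void) { puts("hello"); }

        int main(void) {
            /* Code pages are mapped r-x, not writable. The function-pointer
             * cast is non-portable C but works on mainstream platforms. */
            unsigned char *p = (unsigned char *)(void *)target;
            printf("first byte of target: 0x%02x\n", *p); /* reading is fine */
            *p = 0x90; /* writing faults: the OS kills the process */
            return 0;
        }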

You're not wrong. AgentME is correct that crashes can be exploitable, but it is definitely more dangerous for software to continue running after its data is corrupted.

The Erlang programming language, in fact, is built around the idea that as soon as you see data you don't expect, you crash, and an external process will start you back up in a known good state.
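The same "let it crash, restart clean" pattern can be sketched outside Erlang too; here's a minimal supervisor loop in C that restarts a worker process whenever it dies (./worker is a placeholder):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Child: start the worker in a known good state. */
                execl("./worker", "worker", (char *)NULL);
                _exit(127);  /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);  /* block until the worker dies */
            if (WIFSIGNALED(status))
                fprintf(stderr, "worker crashed (signal %d), restarting\n",
                        WTERMSIG(status));
            else
                fprintf(stderr, "worker exited (%d), restarting\n",
                        WEXITSTATUS(status));
            sleep(1);  /* simple backoff so a crash loop doesn't spin */
        }
    }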

Depends on what we mean by crash.

If a program gives up and exits on receipt of unexpected input, that can be perceived as a "crash" by the user, but it's not exploitable.

If it's crashing because execution suddenly jumped somewhere it shouldn't be, and the OS killed it, that's more worrisome.
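In code, the benign case is a deliberate bail-out on input the program doesn't understand; a minimal sketch, with a made-up parse_header and format check:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical parser entry point: refuse input it doesn't understand
     * up front, instead of processing it and corrupting state later. */
    static void parse_header(const unsigned char *data, size_t len) {
        if (len < 4 || data[0] != 0x7f) {  /* made-up format check */
            fprintf(stderr, "malformed input, giving up\n");
            exit(EXIT_FAILURE);            /* deliberate, clean, not exploitable */
        }
        /* ... parse for real ... */
    }

    int main(void) {
        unsigned char bad[] = { 0x00, 0x01 };
        parse_header(bad, sizeof bad);     /* prints the error and exits */
        return 0;
    }

The dangerous case is the other one: execution jumps through a corrupted pointer or return address, and the SIGSEGV is just the symptom.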