There was a discussion here a few years ago (https://news.ycombinator.com/item?id=2686580) about memory vulnerabilities in C. Some people tried to argue back then that the various protections offered by modern OSs and runtimes, such as address space randomization, and the availability of tools like Valgrind for finding memory access bugs, mitigate this. I really recommend re-reading that discussion.
My opinion, then and now, is that C and other languages without memory checks are unsuitable for writing secure code. Plainly unsuitable. They need to be restricted to writing a small core system, preferably small enough that it can be checked using formal (proof-based) methods, and all the rest, including all application logic, should be written using managed code (such as C#, Java, or whatever - I have no preference).
This vulnerability is the result of yet another missing bounds check. It wasn't discovered by Valgrind or some such tool, since it is not normally triggered - it needs to be triggered maliciously or by a testing protocol which is smart enough to look for it (a very difficult thing to do, as I explained on the original thread).
The fact is that no programmer is good enough to write code which is free from such vulnerabilities. Programmers are, after all, trained and skilled in following the logic of their program. But in languages without bounds checks, that logic can fall away as the computer starts reading or executing raw memory, which is no longer connected to specific variables or lines of code in your program. All non-bounds-checked languages expose multiple levels of the computer to the program, and you are kidding yourself if you think you can handle this better than the OpenSSL team.
We can't end all bugs in software, but we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm. It has now cost us a two-year window in which 70% of our internet traffic was potentially exposed. It will cost us more before we manage to end it.
From a quick reading of the TLS heartbeat RFC and the patched code, here's my understanding of the cause of the bug.
TLS heartbeat consists of a request packet including a payload; the other side reads and sends a response containing the same payload (plus some other padding).
In the code that handles TLS heartbeat requests, the payload size is read from the packet controlled by the attacker:
n2s(p, payload);
pl = p;
Here, p is a pointer to the request packet, and payload is the expected length of the payload (read as a 16-bit short integer: this is the origin of the 64K limit per request).
pl is the pointer to the actual payload in the request packet.
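For readers who don't know OpenSSL's macros, n2s roughly does the following - this is a paraphrase of its effect, not the exact OpenSSL source:

```c
#include <assert.h>

/* Rough equivalent of OpenSSL's n2s macro: read a 16-bit
 * big-endian ("network order") value from *p into s, then
 * advance p past the two length bytes. */
#define n2s(p, s) do {                       \
        (s) = ((unsigned int)((p)[0]) << 8)  \
            |  (unsigned int)((p)[1]);       \
        (p) += 2;                            \
    } while (0)
```

So after the two lines quoted above, payload holds whatever 16-bit length the attacker put on the wire, and p (and hence pl) points just past it.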
Then the response packet is constructed:
/* Enter response type, length and copy payload */
*bp++ = TLS1_HB_RESPONSE;
s2n(payload, bp);
memcpy(bp, pl, payload);
The payload length is stored into the destination packet, and then the payload is copied from the source packet pl to the destination packet bp.
The bug is that the payload length is never actually checked against the size of the request packet. Therefore, the memcpy() can read arbitrary data beyond the storage location of the request by sending an arbitrary payload length (up to 64K) and an undersized payload.
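In concrete terms, the missing check looks roughly like this - a sketch of the idea, with invented names, not the actual OpenSSL patch:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch: only copy the payload if the attacker-
 * supplied length actually fits inside the received record.
 * record_len is the true number of bytes received; the constants
 * account for the 1-byte type, the 2-byte length field, and the
 * 16-byte minimum padding required by the heartbeat RFC. */
static int copy_heartbeat_payload(unsigned char *dst,
                                  const unsigned char *payload_ptr,
                                  size_t claimed_len,
                                  size_t record_len)
{
    if (1 + 2 + claimed_len + 16 > record_len)
        return -1;  /* discard silently, per RFC 6520 */
    memcpy(dst, payload_ptr, claimed_len);
    return 0;
}
```

With a check like this, an attacker claiming a 64K payload while sending an undersized packet gets the request dropped instead of 64K of adjacent heap memory echoed back.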
I find it hard to believe that the OpenSSL code does not have any better abstraction for handling streams of bytes; if the packets were represented as a (pointer, length) pair with simple wrapper functions to copy from one stream to another, this bug could have been avoided. C makes this sort of bug easy to write, but careful API design would make it much harder to do by accident.
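The kind of abstraction I mean could be as small as this - one possible sketch, not OpenSSL's actual API:

```c
#include <stddef.h>
#include <string.h>

/* A (pointer, length) pair plus a cursor, with every read going
 * through a checked helper instead of raw pointer arithmetic. */
typedef struct {
    const unsigned char *data;
    size_t len;
    size_t pos;
} stream_t;

/* Copy n bytes out of the stream; fail if fewer than n remain. */
static int stream_read(stream_t *s, unsigned char *out, size_t n)
{
    if (n > s->len - s->pos)
        return -1;              /* would run past the end of the buffer */
    memcpy(out, s->data + s->pos, n);
    s->pos += n;
    return 0;
}
```

If every read of the request packet had gone through something like stream_read, the attacker's oversized payload length would have produced an error return rather than a wild memcpy.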
It is indeed astonishing how simple-minded this bug is. But these bugs come in all levels of complexity, from simple overstuffed buffers to logical ping-pong that hurts your brain when you try to follow it. We need to get rid of them once and for all. If the whole world can't use a certain tool effectively, then the whole world isn't broken; the tool is bad.
I've felt that C makes this sort of bug easy to write because it makes doing the right thing hard. What you are describing is just a lot of work in C, compared to a language with something akin to Java's generics, which are in turn an afterthought next to the ML family of languages. What we're asking for is not that complicated from a PL standpoint. A generic streams library?
Economics plays an invisible part here. Someone writing a library has a limited amount of time to implement some set of features, and has to balance that against other needs, like making the code "clean"/pretty and secure. In this case, pretty code and secure code go hand in hand. Consumers likewise have to balance feature needs against how likely the code is to explode. What it comes down to is that you aren't likely to get secure, stable code in a language that doesn't inherently encourage it.
It starts to be clearer then, that the more modern, "prettier" languages offer material benefits in their efforts to be more elegant.
Thanks for this. How is this reading arbitrary memory locations though? Isn't this always reading what is near the pl? As in, can you really scan the entire process's memory range this way or just a small subset where malloc (or the stack, whichever this is) places pl?
This reminds me of what another programmer told me a long time ago when we were discussing C; "The problem with C is that people make terrible memory managers.". So true.
I agree that an abstraction for this seems to be missing, but I always have the feeling that what you're doing is covering holes in a leaking dam: you might get good at it, but you'll always have leaks.
I have always detested C (also C++) because it's so unreadable... the snippets of code you cite are just so dense, i.e. a function like n2s() gives pretty much no indication of what it does to a casual reader. Even reading the RFC (which is pretty much written in a C style) gives me the creeps.
The RFC doesn't mention why there has to be a payload, why the payload has to be a random size, why they echo the payload, or why there has to be padding after the payload. And the data isn't really just a regular C struct like the RFC makes it out to be (I didn't know you could have a struct with a variable size; apparently the fields are really pointers, or else the struct is just a mental model and not a real one).
Apparently the purpose of the payload is path MTU discovery. Something that is supposed to happen at the IP layer, but I don't know enough about datagram packets. I guess an application may want to know about the MTU as well...
I'm not here to point fingers, I'm just saying C is a nightmare to me and a reason for me to never be involved with system programming or something like drafting RFC's ;-).
But if one can argue that C is a bad choice for writing this stuff, then that is not an isolated thing. "C" is also the language of the RFCs. "C" is also the mindset of the people doing that writing. After all, the language you speak determines how you think. It introduces concepts that become part of your mental models. I could give many examples, but that's not really the point.
And it's about style and what you give attention to. To me, that RFC is a really bad document. It starts explaining requirements for exceptional scenarios (like when the payload is too big) before even introducing and explaining the main concepts and the hows and whys.
So while you may argue that this is a C problem and not a protocol problem, it is really all related.
And you may also say, in response to someone blaming these coders, that blame is inappropriate (and it is) because these are volunteers donating their free time to something they find valuable; but the distribution and burden of responsibility is, naturally, also part of the culture and how people self-organize and so on.
As someone else explained (https://news.ycombinator.com/item?id=7558394) the protocol is real bad but it is the result of more or less political limitations around submitting RFCs for approval. There is no reason for the payload in TLS (but apparently there is in DTLS) but my point is simply this:
If you are doing inelegant design this will spill over into inelegant implementation. And you're bound to end up with flaws.
Rather than trying to isolate the fault here or there, I would say this is a much larger cultural thing to become aware of.
This sort of argument is becoming something of a fashion statement amongst some security people. It's not a strictly wrong argument: writing code in languages that make screwing up easy will invariably result in screwups.
But it's a disingenuous one. It ignores the realities of systems. The reality is that there is currently no widely available memory-safe language that is usable for something like OpenSSL. .NET and Java (and all the languages running on top of them) are not an option, as they are not everywhere and/or are not callable from other languages. Go could be a good candidate, but without proper dynamic linking it cannot serve as a library callable from other languages either. Rust has a lot of promise, but even now it keeps changing every other week, so it will be years before it can even be considered for something like this.
Additionally, although the parsing portions of OpenSSL need not deal with the hardware directly, the crypto portions do. So your memory-safe language needs some first-class escape hatch to unsafe code. A few of them do have this, others not so much.
It's fun to say C is inadequate, but the space it occupies does not have many competitors. That needs to change first.
First, I do realize that rewriting the software stack from the ground up to have only managed code is a huge task. I do think that as an industry, we should set a goal of having at least one server implementation along these lines (where 'set a goal' may mean, say, grants or calls for proposals). Microsoft Research implemented an experimental OS like that, although it probably didn't have all the features a modern OS would need. I don't know if we need a new language, but we do need a huge rethink of the server architecture, and not just a piece-by-piece rewrite, which I think will founder on the interface issues that you mentioned.
Anyway, I am quite realistic about the prospect of my comment having that kind of effect on the industry - I don't suffer from delusions of grandeur. I was aiming the comment more at people who choose C/C++ for no good reason to write a user-level app; that app is nearly certain to have memory use errors, and if it has any network or remote interface, chances are they can be easily exploited. I'd like as many people as possible to understand that they can't expect to avoid such errors, any more than one of the most heavily audited pieces of software avoided them. We have had decades of exploits of this vulnerability, and yet most programmers are oblivious to it, or think only bad programmers are at risk. So just as tptacek goes around telling people not to write their own crypto, I go around telling people - with less authority and effectiveness, unfortunately - not to write C/C++ code unless they really need to.
As for the performance issues forcing OpenSSL to use C, well, we apparently exposed all our secrets in the pursuit of shaving off those cycles. I hope we are happy.
We might be stuck with C for quite a while but then maybe the more interesting question is 'how does this sort of thing get past review?'. It's not hard to imagine how semantic bugs (say, the debian random or even the apple goto bug) can be missed. This one, on the other hand, hits things like 'are the parameters on memcpy sane' or 'is untrusted input sanitized' which you'd think would be on the checklist of a potential reviewer.
"Rust has a lot of promise, but even now it keeps changing every other week..."
A larger problem, in my opinion, is that things like OpenSSL are used (and should be!) from N other languages. As a result, calling into the library requires almost by definition a lowest-common-denominator interface. Which is C.
C code calling into Rust can certainly be done, but I believe it currently prohibits using much of the standard library, which also removes a lot of the benefits.
C++ doesn't, I think, have as much of a problem there, but I'm somewhat skeptical of C++ as a silver bullet in this case.
Why not write the code in C# (for example) and extract it to $SYSTEM_PROGRAMMING_LANGUAGE? It wouldn't be much different than what Xamarin are doing now for creating iOS and Android apps with C#.
>Additionally, although the parsing portions of OpenSSL need not deal with the hardware directly, the crypto portions do. So your memory-safe language needs some first-class escape hatch to unsafe code. A few of them do have this, others not so much.
For the other points there is some debate, but don't most serious languages have a C FFI?
I believe Haskell could be up to the job, but I heard that there were some difficulties in guarding against timing attacks. However, those reports could have just been noise. I do know that a functional (in both senses, haha) operating system was made in Haskell.
Aren't Operating Systems lower level than OpenSSL?
Other than C there are also C++ and D if you don't want to stray too far from C. The problem with C++ is that even though it is possible to adopt a memory-safe programming style in C++, those concepts are not prevalent in the community.
What you say can easily be disproved, and you are simply asking for too much if you ask for something to be a drop-in replacement for OpenSSL. Some re-architecting is required simply because of the insecurity of C.
For example, a shared library that implements SSL would have to be a shim for something living in a separate process space.
That is a Haskell implementation of TLS. It is written in a language that has very strong guarantees about mutation, and a very powerful type system which can express complex invariants.
Yes, crypto primitives must be written in a low level language. C is not low level enough to write crypto, neither securely nor fast, so that's not an argument in its favor.
There are several languages that do fill that gap, but security people never use them. For example, Cyclone is pretty good. (http://cyclone.thelanguage.org/).
> C and other languages without memory checks are unsuitable for writing secure code
I vehemently disagree. Well-written C is very easy to audit. Much, much more so than languages like C# and Java, where something I could do with 200 lines in a single C source file requires 5 different classes in 5 different files. The problem with C is that a lot of people don't write it well.
Have you looked at the OpenSSL source? It's an ungodly f-cking disaster: it's very very difficult to understand and audit. THAT, I think, is the problem. BIND, the DNS server, used to have huge security issues all the time. They did a ground-up rewrite for version 9, and that by and large solved the problem: you don't read about BIND vulnerabilities that often anymore.
OpenSSL is the new BIND; and we desperately need it to be fixed.
(If I'm wrong about BIND, please correct me, but AFAICS the only non-DoS vulnerability they've had since version 9 is CVE-2008-0122)
> but we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm.
If we're playing the blame game, blame the x86 architecture, not the C language. If x86 stacks grew up in memory (that is, from lower to higher addresses), almost all "stack smashing" attacks would be impossible, and a whole lot of big security bugs over the last 20 years could never have happened.
(The SSL bug is not a stack-smashing attack, but several of the exploits leveraged by the Morris worm were)
> The problem with C is that a lot of people don't write it well.
Including people responsible for one of the most important security-related libraries in the world. No matter how good and careful a programmer is, they are still human and prone to error. Why not put every chance on our side and use languages (e.g. Rust, Ada, ATS, etc.) that make entire classes of errors impossible? They won't fix all problems, and definitely not those associated with a bad code base, but it'd still be many times better than hoping people don't screw up pointer lifetimes.
>The problem with C is that a lot of people don't write it well.
There are languages that make it very, very hard to write bad code. Haskell is a good example: if your program type-checks, there's a high chance it's correct.
C is a language that doesn't offer many advantages but offers very many disadvantages for its weak assurances. Things like the Haskell compiler show that you can get strong typing for free, and there's no longer many excuses to run around with raw pointers except for legacy code.
Agreed. Simple code is easy to understand and just as easy to find any bugs in. After looking at the heartbeat spec and the code, I can already see a simplification that, had it been written this way, would've likely avoided introducing this bug. Instead of allocating memory of a new length, how about just validating the existing message fields as per the spec:
> The total length of a HeartbeatMessage MUST NOT exceed 2^14 or max_fragment_length when negotiated as defined in [RFC6066].
> The padding_length MUST be at least 16.
> The sender of a HeartbeatMessage MUST use a random padding of at least 16 bytes.
> If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently.
Then if it's all good, modify the buffer to change its type to heartbeat_response, fill the padding with new random bytes, and send this response. No need to copy the payload (which is where the bug was), no need to allocate more memory.
(Now I'm sure someone will try to find a flaw in this approach...)
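The validate-then-respond-in-place approach described above might look like this - a sketch of the idea with invented names, following RFC 6520's HeartbeatMessage layout, not a drop-in replacement for the OpenSSL handler:

```c
#include <stddef.h>

enum { HB_REQUEST = 1, HB_RESPONSE = 2, MIN_PADDING = 16 };

/* Validate the received message against the spec's MUSTs, then turn
 * it into a response in place. No new allocation, no payload copy. */
static int handle_heartbeat_in_place(unsigned char *msg, size_t msg_len)
{
    size_t payload_len;

    if (msg_len < 1 + 2 + MIN_PADDING)
        return -1;                         /* too short to be valid */
    if (msg[0] != HB_REQUEST)
        return -1;
    payload_len = ((size_t)msg[1] << 8) | msg[2];
    if (1 + 2 + payload_len + MIN_PADDING > msg_len)
        return -1;                         /* payload_length too large:
                                            * discard silently per the RFC */

    msg[0] = HB_RESPONSE;                  /* flip the type byte */
    /* (a real implementation would also re-randomize the padding
     * bytes here before sending msg back otherwise unchanged) */
    return 0;
}
```

Since the buffer is never copied, there is simply no memcpy whose length an attacker could inflate.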
My favorite is that the Morris worm dates back to late 1988 when MS was starting the development of OS/2 2.0 and NT. Yea, I am talking about the decision to use a flat address space instead of segmented.
That's why I have high hopes for Rust. We really need to move away from C for critical infrastructure. Perhaps C++ as well, though the latter does have more ways to mitigate certain memory issues.
Incidentally, someone on the mailing list brought up the issue of having a compiler flag to disable bounds checking. However, the Rust authors were strictly against it.
I'm excited about Rust for this reason as well, but in practice I find myself thinking a lot about data moving into and out of various C libraries. The great but inevitably imperfect theory is that those call sites are called out explicitly and should be as limited as possible. It works well but isn't a silver bullet. I'm hopeful that as the language ecosystem matures there will be increasingly mature C library wrappers and (even better!) native, memory-safe, Rust replacements for things.
I'd disagree about C++. In my experience, the only things it adds is (1) a false sense of security (since the compiler will flag so many things which are not really big problems, but will happily ignore most overrun issues), (2) lots of complicated ways to screw up, such as not properly allocating/deleting things deep in some templated structure, and (3) interference with checking tools - I got way more false positives from Valgrind in C++ code than in C.
I wish godspeed to Rust and any other language which doesn't expose the raw underlying computer the way C/C++ does, which is IMO insane for application programming.
"The fact is that no programmer is good enough to write code which is free from such vulnerabilities."
"...you are kidding yourself if you think you can handle this better than the OpenSSL team."
Well, I can think of at least one example that counters this supposition. As someone points out elsewhere in this thread, BIND is like OpenSSL. And others wrote better alternatives, one of which offered a cash reward for any security holes and has afaik never had a major security flaw.
What baffles me is that no matter how bad OpenSSL is shown to be, it will not shake some programmers' faith in it.
I wonder if the commercial CA's will see a rise in the sale of certificates because of this.
Sloppy programmer blames language for his mistakes. News at 11.
Nothing in the standard prevents a C compiler + tightly coupled malloc implementation from implementing bounds checks. Out-of-bounds operations result in undefined behavior, and crashing the program is a valid response to undefined behavior. If your malloc implementation cooperates, you can even bounds-check pointer arithmetic without violating calling conventions.
It's quite a shame that there isn't a compiler that does this, and it's a project I've considered spending some time on if I can find a big enough block of that to get a solid start.
Unrestricted pointer arithmetic is indeed incompatible with memory safety. You set a pointer to point to one structure, then you change it and it now points to another structure or array. The compiler doesn't know the semantics of your code, so how can it tell whether you meant to do that? And malloc/memcpy operate at far too low a level to check this stuff: they only see memory addresses and have no idea what variables live in them. "Tightly coupled" would mean passing information like "variable secret_key occupies address such-and-such" into the libc, which does violate POSIX standards and will result in lots of code breaking. I don't see why we wouldn't just write in C# or Java or Rust, instead of a memory-safe subset of C (and it would have to be a subset).
Edit: here's one project for making a memory-safe C: http://www.seclab.cs.sunysb.edu/mscc/ . Interesting, but (a) it is a subset of C, (b) it doesn't remove all vulnerabilities, and (c) I still don't grok the advantage of using this over a language actually designed for modern, secure application programming.
C language environments that worked like this have been commercially available in the past: Saber-C in the '90s, and perhaps earlier, was one example.
One problem is that the obvious implementation technique is to change the representation of pointers (to include base and bounds information, or a pointer to that), which means that you need to redo a lot of the library as well. (Or convert representations when entering into a stock library routine, and accept that whatever it does with the pointer won't get bounds-checked.)
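To make the representation change concrete, a "fat pointer" scheme like the one described above might look like this - a minimal sketch of the idea, not any particular tool's implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Carry base and bounds alongside the address, and check every
 * dereference. Changing the representation like this is exactly
 * why stock libraries would need to be recompiled or wrapped. */
typedef struct {
    unsigned char *addr;   /* current address */
    unsigned char *base;   /* start of the allocation */
    size_t         size;   /* size of the allocation */
} fatptr_t;

static unsigned char fat_deref(fatptr_t p)
{
    /* abort on any access outside [base, base + size) */
    assert(p.addr >= p.base && p.addr < p.base + p.size);
    return *p.addr;
}

static fatptr_t fat_add(fatptr_t p, ptrdiff_t off)
{
    p.addr += off;         /* arithmetic is allowed to go anywhere... */
    return p;              /* ...only dereferencing is checked */
}
```

The cost is visible immediately: every pointer triples in size and every library that receives one needs to understand the new layout, which is the interop problem the parent comment describes.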
I implemented this once in my C interpreter picoc. Users hated it because it also prevented them from doing some crazy C memory access tricks, so I ended up taking it out.
If you have a char *buf you got from the network stack and you have to copy buf[3] bytes starting at position buf+15, the compiler has no way of knowing what to check to make sure you don't cross the boundary of that buffer.
"Intel MPX is a set of processor features which, with compiler, runtime library and OS support, brings increased robustness to software by checking pointer references whose compile time normal intentions are usurped at runtime due to buffer overflow."
I think clang's AddressSanitizer gets pretty close to what you want. It misses some tricky cases on use-after-return, but other than that it offers pretty robust memory safety model for bounds checks, double free, and so on.
> This vulnerability is the result of yet another missing bound check. It wasn't discovered by Valgrind or some such tool, since it is not normally triggered - it needs to be triggered maliciously or by a testing protocol which is smart enough to look for it (a very difficult thing to do, as I explained on the original thread).
You could also look at this bug as an input sanitization failure. The author didn't consider what to do when the length field in the header is longer than what comes over the wire (even when writing the code in a secure language, this case should be handled somehow, maybe by logging or dropping the packet).
The defined behaviour would be to discard the packet. In a secure language, the buffer would have had a "length" property, and the code would have crashed when a read beyond the buffer's end was attempted. But in C, buffers are just pointers, so there is fundamentally nothing wrong with reading beyond the end of the buffer. So instead of a crash, we get silent memory exposure.
Isn't this basically the whole point of QuickCheck-like testing frameworks? They're basically a specification that is attempted to be falsified in some way by a fuzzer. I don't see why most C projects couldn't be doing this.
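In C terms, a QuickCheck-style property test is just a loop that generates random inputs and checks that an invariant holds. A self-contained sketch - the parser and all names here are invented for illustration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A tiny length-prefixed parser to test: returns 0 and copies the
 * body only if the declared length fits in the received buffer. */
static int parse_record(const unsigned char *buf, size_t buf_len,
                        unsigned char *out)
{
    size_t declared;
    if (buf_len < 2)
        return -1;
    declared = ((size_t)buf[0] << 8) | buf[1];
    if (declared > buf_len - 2)
        return -1;
    memcpy(out, buf + 2, declared);
    return 0;
}

/* Property: for random inputs, the parser never accepts a declared
 * length larger than what was actually received. Returns 1 if the
 * property held over all iterations, 0 if it was ever violated. */
static int fuzz_parser(unsigned iterations)
{
    unsigned char buf[64], out[64];
    unsigned i;
    size_t j;
    srand(12345);                      /* deterministic for the test */
    for (i = 0; i < iterations; i++) {
        size_t len = (size_t)(rand() % (int)(sizeof buf + 1));
        for (j = 0; j < len; j++)
            buf[j] = (unsigned char)rand();
        if (parse_record(buf, len, out) == 0) {
            size_t declared = ((size_t)buf[0] << 8) | buf[1];
            if (declared > len - 2)
                return 0;              /* property violated */
        }
    }
    return 1;
}
```

A proper fuzzer or QuickCheck port would shrink failing inputs and bias generation toward edge cases, but even this naive loop exercises exactly the "declared length vs. actual length" mismatch behind Heartbleed.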
Speaking of proofs, how about we write security critical code in haskell? You need a very simple runtime, but beyond that it would work pretty much wherever.
Most memory-related bugs are automatically eliminated, and security proofs are easier.
Go or Java on top. Coding in C is like juggling chainsaws to say you can juggle them. C is certainly better than old school Fortran where memory management wasn't developed until later, but platforms like Erlang, Go and JRuby are really hard to beat.
The only problem is convincing people to migrate to different tools and transition codebases to another language. It would take a large project like FreeBSD, LLVM or the Linux kernel to move the needle.
Fortran was not meant to be a systems programming language. The fact that it did not have memory management does actually make sense in scientific applications, where you typically know your problem size in advance or can just recompile before a day long computation.
Why port all the security vulns over to Rust? There are already a handful of SSL implementations, it isn't horribly hard to do. Maybe start with http://hackage.haskell.org/package/tls
> we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm. It has now cost us a two-year window in which 70% of our internet traffic was potentially exposed. It will cost us more before we manage to end it.
Could one make a new kind of OS where C programs are compiled to some intermediate representation then when run this is JIT compiled within a managed hypervisor sandbox? Could Chrome OS become something like this? Does this already exist? MS had a managed code OS called Singularity.
> My opinion, then and now, is that C and other languages without memory checks are unsuitable for writing secure code.
I think they can be used to write secure code, but it has to be done carefully, with really thorough checks and unit tests, and a constant awareness of the vulnerabilities.
Everything I've heard about OpenSSL so far, suggests it was done by a bunch of cowboys who don't care about code quality. Those people shouldn't be writing C, but a safer language.
However, qmail is written in C and has a very good record. So I would disagree with "The fact is that no programmer is good enough to write code which is free from such vulnerabilities."
There seem to be at least two programmers who are capable of that.
This argument came up in the thread from a few years ago. It is quite wrong-headed. I would like to give a clear answer to it:
Virtual machines and runtimes may be vulnerable to malicious CODE. That's bad. Programs written in unmanaged languages are vulnerable to malicious DATA. That's horrible and unmitigatable.
Vulns to malicious code are bad, but they may be mitigated by not running untrusted code (hard, but doable in contexts of high security). They are also mitigated by the fact that the runtime or VM is a small piece of code which may even be amenable to formal verification.
Vulns to malicious data, or malicious connection patterns, are impossible to avoid. You can't accept only trusted data in anything user-facing. Also, these vulnerabilities are spread through billions of lines of application and OS code, as opposed to core runtime/VM.
So reducing the attack surface isn't a laudable goal in your book, because hey, the VM itself can have vulnerabilities so there isn't a point? I think the point is that programmers will always make these mistakes, and we should confine the unsafe code that gets written to as small an attack surface as possible. You're never going to eliminate vulnerabilities, but we sure can try to reduce the likelihood of them occurring. If there is some objective measurement showing this isn't the case - i.e. that the number of JVM vulnerabilities like this outstrips or is on par with the vulnerabilities occurring in purely C/C++ applications - I would love to see it.
Ultimately, I think the better answer will be a language that inherently provides the primitives for safe memory management but that's low-level and highly performant, i.e. Rust or something like it.
In keeping with the tradition of bad car analogies, that's like saying "Driving cars with automatic traction control won't make accidents go away, so automatic traction control is pointless".
Languages with bounds checks on array accesses don't solve everything, but that doesn't mean that they don't work. They do remove entire classes of silent failures that can potentially slip through the cracks in C-like languages. VMs aren't needed for this -- most of the strongly typed functional languages, D, Go, Rust, and others all compile down to native machine code.
Careful API design, discipline, and good coding in C can also mitigate this sort of problem manually, although (like most things in C), it's extra work, and needs careful thought to ensure correctness.
VMs generally do not have this type of vulnerability (buffer overrun).
Also, most vulnerabilities in (e.g.) the JVM can only be exploited by running malicious code inside the VM. Here, the attacker is supplying data used by OpenSSL, but is not able to supply arbitrary code.
Agree. This needs a big fat "the world is coming to an end" style of warning.
I've just shut down the webservers running SSL that I can control.
If you are vulnerable, don't want to build openssl from source, and can afford the outage, I'd recommend doing the same.
OTHERWISE BUILD FROM SOURCE IMMEDIATELY, PATCH, AND GET NEW KEYS!
Let's hope CA's don't get swamped by all the CSR's. Or rather let's hope they do so we see people are doing something...
For me right now these are just my hobby projects. So I don't care if they're down. But I imagine it will be fun tomorrow.
Ok, can anyone assist me with how to update openssl without breaking anything? I've fetched the newest sources from openssl.org and compiled them, but "make install" doesn't actually install it; it only got compiled, and issuing "openssl version" still gives me the old version.
What I want to do is to patch it so our webserver uses new version.
Not to sound like a commercial for Cloudflare or anything, but putting your infrastructure behind their services can protect users while you perform your patching. According to their latest blog post http://blog.cloudflare.com/staying-ahead-of-openssl-vulnerab...
[this command generates a private key and server cert and outputs to pem's]
[Note also the key sizes are 4096; you may want 2048. And I use -sha256, as sha1 is considered too weak nowadays. These certs are valid for 3650 days... 10 years]
Since the command overwrites certs/keys in the current directory of the same name as the outfiles...that's it...you're done. Just restart nginx.
If you change a self-signed cert, like above, expect a new warning from the client on the next connection... this is just your new cert being encountered. Click permanently accept... blah blah.
Interestingly, your tool claims our website (SSL-terminated at our ELB instance) is still vulnerable; while this other tool (http://possible.lv/tools/hb) claims we are unaffected.
Another, known unpatched, app is reported to be affected by both tools.
Is it possible that FiloSottile/Heartbleed may report false positives?
From what I've learned, it reports back if it gets something, when it should get nothing.
How vulnerable a specific site is depends on luck. Yahoo must have broken a whole bunch of mirrors because total amateurs can send mail.yahoo.com a certain blob of code and it has a good chance of returning a stranger's password.
This thing has been in the wild for two years. What are the odds it hasn't been systematically abused? And what does this imply?
To me it sounds kind of like finding out the fence in your backyard was cut open two years ago. Except in this case the backyard is two thirds of the internet.
Worse, it's retroactively unfixable: Even doing all this [revoking certs, new secret keys, new certificates] will still leave any traffic intercepted by the attacker in the past still vulnerable to decryption.
So it would be a good idea to change all your passwords for critical services like email and banks, once they have issued new certs and updated their OpenSSL.
That's slightly misleading. Every private key disclosure leads to decryption of past traffic unless forward secrecy is used.
However, if you switch to a fixed version of OpenSSL now, then an attacker cannot retroactively exploit this bug even if they have recorded all your past traffic, because exploiting the bug requires a live connection.
(Of course, this only applies to attackers who did not know about the bug before it was publicly released, so some worry is still justified. I only wanted to point out that the "retroactively unfixable" is a misleading exaggeration.)
Just received an upgrade on Ubuntu 12.04 LTS as well, apt-get clean issued before updating.
EDIT: If you are using DigitalOcean, the update is not yet on their mirrors. Issue 'sudo sed -i "s/mirrors\.digitalocean/archive.ubuntu/g" /etc/apt/sources.list;sudo apt-get clean;sudo apt-get update;sudo apt-get upgrade' to get the patch. Check the comment by 0x0 above ( https://news.ycombinator.com/item?id=7549842 ) to find any services which need restarting.
Basically yes. However, in my experience, package update urgencies are no good indicator of the update's actual priority.
It's in the "-security" channels and you're supposed to apply all updates from there.
Node.js sort-of dodged a bullet here. It includes a version of openssl that it links against when building the crypto module (and, I would think, the tls module). Node.js v0.10.26 uses OpenSSL 1.0.1e 11 Feb 2013.
What worries me about this is that the commit that fixes it [0] doesn't include any tests. Is that normal in crypto? If I committed a fix to a show-stopper bug without any tests at my day job I'd feel very amateur.
What a great writeup. Comprehensive without being overly verbose, answers to "what does this mean?" and "does this affect me?", and clear calls to action.
While I'm not happy at having to spend my Monday patching a kajillion machines, I welcome more vulnerability writeups in this vein.
(What I want now is an exploit.c, PoC.py, pwnSSL.rb, etc... but I guess it would be irresponsible to provide that to the script-kiddies of the interwebz right now)
I believe the reason they got access was one of their customers found it and reported it to them, and they reported it to OpenSSL, and then it somehow leaked (either with the OSSL release, or someone else) and then they posted their now-public writeups of it.
That's not correct. One of the individuals who discovered the bug contacted us as a large provider of SSL termination services. We were asked not to further disclose the details until it was officially patched and announced by OpenSSL. The official announcement occurred today after which we put up a post to let our customers know that they were protected.
Oh it's even worse, basically every secret you had in your server processes' RAM was potentially read in real-time by an attacker for the last 2 years.
Honestly, why aren't the formal verification people jumping on this? I keep hearing about automatic code generation from proof systems like Coq and Agda but it's always some toy example like iterative version of fibonacci from the recursive version or something else just as mundane. Wouldn't cryptography be a perfect playground for making new discoveries? At the end of the day all crypto is just number theory and number theory is as formal a system as it gets. Why don't we have formal proofs for correct functionality of OpenSSL? Instead of a thousand eyes looking at pointers and making sure they all point to the right places why don't we formally prove it? I don't mean me but maybe some grad student.
Yes, why doesn't the same thing exist for SSL? The fact that quark was funded by the NSF means that there is interest in actually doing stuff like this.
I think the summary is a bit too sensationalistic in terms of what the actual security implications are:
> The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software.
Yes, while that's true, it's not a "read the whole process' memory" vulnerability which would definitely be cause for panic. The details are subtle:
> Can attacker access only 64k of the memory? There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.
The address space of a process is normally far bigger than 64KB, and while the bug does allow an arbitrary number of 64KB reads, it is important to note that the attacker cannot directly control where that 64KB will come from. If you're lucky, you'll get a whole bunch of keys. If you're unlucky, you might get unencrypted data you sent/received, which you would have anyway. If you're really unlucky, you get 64KB of zero bytes every time.
Then there's also the question of knowing exactly what/where the actual secrets are. Encryption keys (should) look like random data, and there's a lot of other random-looking stuff in crypto libraries' state. Even supposing you know that there is a key, of some type, somewhere in a 64KB block of random-looking data, you still need to find where inside that data the key is, what type of key it is, and more importantly, whose traffic it protects before you can do anything malicious.
> Without using any privileged information or credentials we were able to steal from ourselves the secret keys
It really helps when looking for keys, if you already know what the keys are.
In other words, while this is a cause for concern, it's not anywhere near "everything is wide open", and that is probably the reason why it has remained undiscovered for so long.
It's not hard to screen what's returned for chunks that look like they could be keys (you know the private key's size by looking at the target's certificate, you know it's not all zeros, etc.) and then simply exhaustively check chunks against their public key.
I just looked at one of my running apache processes, it only has 3MB of heap mapped (looked at /proc/12345/maps). That's not a whole lot of space to hide the keys in.
I agree entirely with your post, and I can't quite understand the hysteria in this thread. The odds of getting a key using this technique are incredibly low to begin with, let alone being able to recognize you have one, and how to correlate it with any useful encrypted data.
Supposing you do hit the lottery and get a key somewhere in your packet, you now have to find the starting byte for it, which means having data to attempt to decrypt it with. However, now you get bit by the fact that you don't have any privileged information or credentials, so you have no idea where decryptable information lives.
Assuming you are even able to intercept some traffic that's encrypted, you now have to try every word-aligned 256B(?) string of data you collected from the server, and hope you can decrypt the data. The amount of storage and processing time for this is already ridiculous, since you have to manually check if the data looks "good" or not.
The odds of all of these things lining up is infinitesimal for anything worth being worried about (banks, credit cards, etc.), so the effort involved far outweighs the payoffs (you only get 1 person's information after all of that). This is especially true when compared with traditional means of collecting this data through more generic viruses and social engineering.
So, while I'll be updating my personal systems, I'm not going to jump on to the "the sky is falling" train just yet, until someone can give a good example of how this could be practically exploited.
I have successfully extracted a key and decrypted traffic in a lab. I'm refining my automatic process. You're forgetting analysis of the runtime layout of OpenSSL in RAM which is quite predictable on machines without defensive measures. I have a 100% success rate extracting memory and about a 20% success rate programmatically extracting the secret key of the server. I'm nearly 100% against a certain version of Apache with standard distribution configuration.
I did this with no formal CS education and about 400 lines of code. I'm an operations engineer, not a security expert. Once I get it 100% and review my situation legally, I'll probably publish what I have.
Now is not the time to be conservative. Efforts to downplay this vulnerability are directly damaging to the Internet's security and, given that you are a single-issue poster, suspicious.
>Supposing you do hit the lottery and get a key somewhere in your packet, you now have to find the starting byte for it, which means having data to attempt to decrypt it with. However, now you get bit by the fact that you don't have any privileged information or credentials, so you have no idea where decryptable information lives.
Login page of any SaaS will be transmitted over SSL and you'll know what it looks like a priori.
I'm very curious to see the change that introduced the bug in the first place. According to the announcement it was introduced in 1.0.1. That's the version that added Heartbeat support, so maybe it was a bug from the beginning.
Probably to make it more clear what you're referring to, and double-check yourself. There are probably components that are 1 byte, 2 bytes, and 16 bytes long. Writing it out makes it clear and eliminates a chance for human error in the sum, more than a magic 19 does. (I guess 16 is pretty magical too, though. At least it's a "round" number, and in context may be a well-known field size of something in the protocol.)
After reading your comment, I started looking back at the packets I got using the script on a site I knew was not patched. Damn.. there are plaintext passwords in there for paypal.
Does SSH (specifically sshd) on major OSes use affected versions of OpenSSL? [answer pulled up from replies below: since sshd doesn't use TLS protocol, it isn't affected by this bug, even if it does use affected OpenSSL versions]
What's the quickest check to see if sshd, or any other listening process, is vulnerable?
(For example, if "lsof | grep ssl" only shows 0.9.8-ish version numbers, is that a good sign?)
The bug is in the handling of the TLS protocol itself (actually, in a little-used extension of TLS, the TLS Record Layer Heartbeat Protocol), and isn't exposed in applications that just use TLS for crypto primitives.
This doesn't sound like "responsible disclosure" to me - how can Codenomicon dump this news when all the major Linux vendors don't have patches ready to go?
Well someone was able to give Cloudflare a heads up last week [1].
It would have been nice if the package maintainers could have had time to build ready-to-roll solutions with Heartbeat compiled out prior to the official OpenSSL fix.
> Recovery from this bug could benefit if the new version of the OpenSSL would both fix the bug and disable heartbeat temporarily until some future version... If only vulnerable versions of OpenSSL would continue to respond to the heartbeat for next few months then large scale coordinated response to reach owners of vulnerable services would become more feasible.
This sounds risky to me. I'm afraid attackers would benefit more from this decision than coordinated do-gooders.
That is my concern as well. We are still running CentOS 6.4, which does not have the affected version of OpenSSL, but we terminate SSL at the ELB, so if they are affected then our keys are not safe.
The forum thread has just been updated with this reply:
"We can confirm that load balancers using Elastic Load Balancing SSL termination are vulnerable to the Heartbleed Bug (CVE-2014-0160) reported earlier today. We are currently working to mitigate the impact of this issue and will provide further updates."
Rackspace guy here. We have been digging in and it appears that we did have the impacted version of openssl installed but the heartbeat extension was disabled. Regardless, we have updated everything on the Cloud Load Balancer side to 1.0.1g. I will update here if we find anything different.
What are the chances that the NSA is having a field day with this in the 24-48 hours that it will take everyone to respond? Also, is it possible that CAs have been compromised to the point where root certs should not be trusted?
What are the odds that the NSA didn't already know about it? Even if you don't think they would have deliberately monkeywrenched OpenSSL (as they are widely believed to have done with RSA's BSAFE), they certainly have qualified people poring over widely used crypto libraries, looking for missing bounds checks and all manner of other faults --- quite likely with automated tooling.
As to CAs, there have been enough compromises already from other causes that serious crypto geeks like Moxie Marlinspike are trying to change the trust model to minimize the consequences --- see http://tack.io
What's interesting is that RFC 1122 from 1989 warned about problems like these, and gave a very good approach to prevent them from occurring:
At every layer of the protocols, there is a general rule whose application can lead to enormous benefits in robustness and interoperability [IP:1]: "Be liberal in what you accept, and conservative in what you send"
Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; [...]
This is too much by at least one order of magnitude.
What's the going price for a crypto-level code review (I'm not even saying audit) these days?
Is all this code necessary for state-of-the-art encryption, or is it rather backwards-compatibility baggage? If the latter: how much could be gained by splitting the project into '-current' and '-not'?
Thanks! So how does this work: say I have this project and I want it audited -- would you (or the company/person you had in mind) give me an estimate like "I'd need 3 weeks for 25%, 5 weeks for 50%, or 10 weeks for 95% coverage", or do you simply analyse away for a week (or whatever time I'm willing to pay for) and try to find something?
That cheap? A freelance web/mobile developer can charge over $5K per week; I find it hard to believe that you could get a quality security code review for that price.
Great writeup but I guess I'm still a bit confused. As someone responsible for rails servers I can see that I need to update nginx and openssl as soon as packages become available or compile myself. What about keys though? Do I need to get our SSL certs re-issued? regenerate SSH keys? Anything else that I should be doing?
If you're running a vulnerable version of OpenSSL and want to be truly careful, assume your private keys (not just certs) are already compromised. Once new packages are available, you need to update and then re-roll your crypto.
Also, if you're using those keys to protect other secrets like passwords - say, DB credentials or AWS keys stored in a Git repo hosted behind HTTPS - you can't really assume those are safe either.
I don't quite understand how this bug works. I would appreciate any input from someone knowledgeable.
It sounds like the heartbeat code is sending some data in the handshake. That data should be harmless (padding? zeroes?) but the bug results in reading off the end of an array and from whatever other data happens to be there. Someone sniffing the connection can then see those bytes fly by. If they happened to contain private info, game over.
Is that a correct read on the situation? If so, my followup questions are: 1) Why is there any extra data being sent at all beyond a simple command to "heartbeat"? 2) How much data is being leaked here and at what rate? Is it a byte every couple of hours, is it kilobytes per minute, or what?
I am particularly interested in #1, since that's the part I really don't get at the moment. I suspect the answer to #2 will be implied by the answer to #1.
> TLS heartbeat consists of a request packet including a payload; the other side reads and sends a response containing the same payload (plus some other padding).
So, what happens is that the payload comes in as a pointer and a size (up to 64KB). The server then prepares a response and copies the memory block [pointer, pointer+payloadSize] into the response.
The attack happens when the payload is smaller than the payload size passed in the request. This results in the response preparation dumping the memory block [pointer+realPayloadSize, pointer+payloadSize] into the response.
Any data in this block is now exposed to the requester, and could contain any data from the process.
Thanks. That lines up with what I've seen elsewhere too. I think the main thing I was missing was that this is not a sniffing attack, but rather an active attack where you talk to a peer over SSL and basically trick it into sending you some content from its memory.
> Can attacker access only 64k of the memory? There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.
...so I guess the answer to #2 is: limited only by how quickly you can send heartbeat requests, and how quickly OpenSSL will answer them.
One obvious - if slightly paranoid - answer is that this was a deliberate backdoor. There appears to be a length field specific to the heartbeat packet that's used to determine how much data from the original packet is included in the response, isn't checked against the actual packet length, and allows lengths up to 64k which is unnecessarily generous for the intended purpose but very useful for this attack.
It does take time for these things to be tested and deployed. Regardless of severity of bug, distributions must test packages before sending them out to all their users.
It would be unfortunate if a new package were to be released immediately only to be soon masked/recalled due to unforeseen consequences.
Of note, the Gentoo package was bumped approximately 2 hours after the advisory was published.
Yeah, I haven't seen any new RPMs for RHEL/CentOS/Fedora yet. Kinda concerning, since I'd expect vendors to be given advance notice and the chance to prep updates to coincide with the announcement.
All my RHEL5 boxes are running 0.9.8, though, at least.
One (selfish) question I have is whether this can affect primary key material stored in an HSM. I'm assuming not, but that the session key generated by the HSM would still be susceptible.
Note that this bug affects way more programs than just Tor — expect everybody who runs an https webserver to be scrambling today.
"If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle." - torProject
Any chance this bug originated with the NSA? It seems like it would fall under their goal of subverting the infrastructure that keeps secrets on the internet. Of course this is exactly why such a goal is a bad idea - an unprotected internet causes widespread damage.
I don't know -- why don't you try reasoning it out since you're the one lobbing the accusation. Upon a very simple review of the code change/patch, one can see this is a relatively new feature, agreed upon and passed by the publicly available IETF, implemented naively.
"Never attribute to malice that which can be adequately explained by incompetence" -- slightly-butchered quote, from someone smarter than me.
It's not an accusation, it's a speculation. I don't have the ability to judge it for myself, i.e. "a simple review of the code change/patch". That's why I put it out there. I don't mind being refuted, but I wish it would be refuted rather than just downvoted blindly.
P.S. I think your quote doesn't capture the situation properly when someone is known to have malicious intent.
I don't think so - while the NSA would dearly like to have the access that this vulnerability would allow, they would dislike even more if anyone could have it. If they're going to insert a backdoor they're going to be damn sure only they have the key.
they did not try to "weaken RSA", as in the RSA algorithm. They paid off and/or infiltrated RSA the corporation. You were not attacked, your posts simply contained wrong information and useless speculation.
Screaming about the NSA every time a security bug comes up is not interesting, productive, insightful, or useful, please stop.
We really need to see some of the big companies take down their services until they've fixed this, and call for every company out there to audit themselves, confirm to users that this is serious, and keep no service online until they've patched their systems. This should get attention beyond just techies. Business as usual is not acceptable: every day that goes by is an opportunity for someone to take advantage of this and get the keys to your service and all past traffic.
I would not be surprised if people at the NSA, GCHQ and most state security services are going into overdrive right now to get access to anything and everything that is vulnerable to this bug.
> I would not be surprised if people at the NSA, GCHQ and most state security services are going into overdrive right now to get access to anything and everything that is vulnerable to this bug.
I assume the NSA has known about this bug for a long time and has been actively exploiting it.
Note: if you use mint.com, it's likely hitting your banks with your login on your behalf today. You'll still want to change those passwords even if you didn't use banking sites during the known vulnerability window.
So, Google and Codenomicon independently found this two-year-old vulnerability at approximately the same time? How does that happen? Are they both looking at the same publicly-shared fuzzing data, or was there a patch that suddenly made it more obvious?
The obvious concern would be that one found it a good while ago, and just didn't bother announcing it until the other team was anyway. I don't believe that's what happened here, but I'm curious what the mechanism actually was.
Is there a way to tell if a third-party site has patched the bug? (Upgraded to 1.0.1g) Not much point in changing your password on that site before the vulnerability is fixed.
All references I see recommend (for 1.0.1-series) to move to 1.0.1g - but the OpenSSL homepage[0] says that 1.0.1g is a Work in Progress. There is a download[1] link for it though. Anybody have definitive answer for what's going on here? It's a little confusing.
I used the OpenSSL library for building a SAML token parser in JBoss (Java). All the front end stuff was Java and OpenSSL was used for public/private key decryption and validation of SAML tokens and signatures. I'm not sure exactly what an OpenSSL "server" is -- it sounds like there is a feature which you can implement (or not) in your webserver to test the SSL/TLS listener.
However, you could -- as I did -- use anything else as your interface for the web. Why you would specifically include a heartbeat just for SSL is beyond me. If a website is up and running, you'll know it with the usual methods, the HTTP status codes. You don't need a separate "heartbeat" to tell you that an internal mechanism for processing a protocol is running... do you?
Testing my externally-accessible OpenVPN server revealed that it is indeed vulnerable. I just powered the box off, going to be a long day at work before I can get home and fix it :/
How to build OpenSSL statically into a source build of nginx: just finished running this with nginx-1.4.7 and OpenSSL 1.0.1g and it compiled just fine. You'll have to tweak it to your environment, of course.
What popular SSL client software uses the vulnerable OpenSSL? (Any web browsers, for example on popular linuxes? How about 'curl' when connecting to HTTPS sites?)
How would a client be compromised? I mean I guess a malicious server could send these bad heartbeat packets and sniff the keys, but if the server is pwned then your secrets are already revealed, right?
Imagine you've got a script that, among other things, does a 'wget' against some innocent plain HTTP URL. But an attacker intercepts your request, and redirects you to an HTTPS URL of their choosing.
Yes, wget uses OpenSSL, and follows redirects silently by default.
Now that server uses heartbleed to x-ray your client process memory, collecting all sorts of confidential information, including perhaps credentials to other services.
This bug has a lot of nasty, unintuitive permutations and repercussions that will take time to fully grasp.
What I find strange is that I have a VPS setup on Digital Ocean, with Ubuntu LTS + OpenSSL 1.0.1 + a manually compiled Nginx. This combination should have been vulnerable, yet my website is not reported as vulnerable by the tools I tried for detecting the vulnerability.
Maybe DigitalOcean issued a fix without me noticing? I also updated my Ubuntu packages, yet OpenSSL is still at 1.0.1.
Tinfoil-hat time: isn't it interesting that within hours (?) of public disclosure of the bug, there's a domain, a logo, a full writeup, everything? The paranoid part of me says the nefarious powers-that-be want us to use the latest version, as though that would further their goals somehow.
Common sense says I'm just being silly. I just wonder.
How feasible would it be to write things like nginx, Apache, web browsers etc. so that they can use both OpenSSL and NSS, where you could choose what to use via config switch? Then it would be easy to "fix" such a bug when it occurs. The probability that both libraries have a vulnerability at the same time is probably very low.
OK well I just updated about 40 servers. Has anyone started working with CAs to reissue SSL certificates signed with a new key? Are they willing to do the reissue for free? In particular I use RapidSSL for most things and Verisign for a few bigger clients who prefer it.
I don’t; but I do not know how I could ever be sure. I’m a generalist sys admin and my knowledge of crypto is limited to the basics. That being said my understanding is that this vulnerability is in the code that creates the sessions not in the certificates themselves. The risk is that my key already was compromised when I was using the vulnerable version. For me this means two things:
1) There is no easy way for me to confirm or deny the CA is fixed short of attempting to exploit them.
2) Even if the CA is not fixed, the vulnerability appears to be in the routines used for session management, not in the SSL certificate itself. While there is CC information and other stuff I would not like to be leaked, the CSR itself only contains my public key, not my private key. As long as my servers are patched and I have an SSL cert using a new keypair that I know has not been compromised, I am not sure if the CA's version of OpenSSL matters or not.
I am in no way trying to pretend I am an expert. I am sure there are problems with my analysis, but it still feels like it's time to be pragmatic and get a fix in place before asking all the what-ifs. Not that those questions should not be asked, but it's a matter of prioritizing.
Would you be somewhat better protected (i.e., not losing private keys, etc.) if your machine sat behind a load balancer? The memory exposed would be that of the load balancer, correct?
How is it that Google and Hotmail were not vulnerable? Were they using their own implementations of SSL? I would have figured Google would make use of OpenSSL.
It means if you're running a bad version of OpenSSL then someone can dump the entire contents of your RAM, including public/private keys, and anything else in memory such as passwords and even DB connections.
As far as I can tell, openvpn with TLS authentication is vulnerable as it just uses the usual TLS suite. If you use PSKs or the (mis-named?) --tls-auth PSK additional MAC, then you are only owned if one of your own legitimate nodes revealed the PSK (or was coopted into performing this attack) in which case you're already owned.
So, basically, it is the consequence of "quickly adding an implementation" of an extension of the TLS protocol to an otherwise mature, more-or-less solid and "slightly" audited (at least by the OpenBSD and FreeBSD teams) code base. OK. It happens.
btw, is OpenBSD affected, or did they do the job well by not blindly adding unnecessary stuff (extensions) and bumping versions without auditing the changes?
"goto fail;" doesn't seem that bad now huh.
Lovely how these GNU/Linux freedom fighters were LOLling their asses off earlier, but when it happens to them they sweat themselves and cry for spoon-fed instructions to compile a software package from its sources.
There was a discussion here a few years ago (https://news.ycombinator.com/item?id=2686580) about memory vulnerabilities in C. Some people tried to argue back then that various protections offered by modern OSs and runtimes, such as address space randomization, and the availability of tools like Valgrind for finding memory access bugs, mitigates this. I really recommend re-reading that discussion.
My opinion, then and now, is that C and other languages without memory checks are unsuitable for writing secure code. Plainly unsuitable. They need to be restricted to writing a small core system, preferably small enough that it can be checked using formal (proof-based) methods, and all the rest, including all application logic, should be written using managed code (such as C#, Java, or whatever - I have no preference).
This vulnerability is the result of yet another missing bound check. It wasn't discovered by Valgrind or some such tool, since it is not normally triggered - it needs to be triggered maliciously or by a testing protocol which is smart enough to look for it (a very difficult thing to do, as I explained on the original thread).
The fact is that no programmer is good enough to write code which is free from such vulnerabilities. Programmers are, after all, trained and skilled in following the logic of their program. But in languages without bounds checks, that logic can fall away as the computer starts reading or executing raw memory, which is no longer connected to specific variables or lines of code in your program. All non-bounds-checked languages expose multiple levels of the computer to the program, and you are kidding yourself if you think you can handle this better than the OpenSSL team.
We can't end all bugs in software, but we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm. It has now cost us a two-year window in which 70% of our internet traffic was potentially exposed. It will cost us more before we manage to end it.
From a quick reading of the TLS heartbeat RFC and the patched code, here's my understanding of the cause of the bug.
TLS heartbeat consists of a request packet including a payload; the other side reads and sends a response containing the same payload (plus some other padding).
In the code that handles TLS heartbeat requests, the payload size is read from the packet controlled by the attacker:
Here, p is a pointer to the request packet, and payload is the expected length of the payload (read as a 16-bit short integer: this is the origin of the 64K limit per request).
pl is the pointer to the actual payload in the request packet.
Then the response packet is constructed:
The payload length is stored into the destination packet, and then the payload is copied from the source packet pl to the destination packet bp.
The bug is that the payload length is never actually checked against the size of the request packet. Therefore, the memcpy() can read arbitrary data beyond the storage location of the request by sending an arbitrary payload length (up to 64K) and an undersized payload.
I find it hard to believe that the OpenSSL code does not have any better abstraction for handling streams of bytes; if the packets were represented as a (pointer, length) pair with simple wrapper functions to copy from one stream to another, this bug could have been avoided. C makes this sort of bug easy to write, but careful API design would make it much harder to do by accident.
It is indeed astonishing how simple-minded this bug is. But these bugs come in all levels of complexity, from simple overstuffed buffers to logical ping-pong that hurts your brain when you try to follow it. We need to get rid of them once and for all. If the whole world can't use a certain tool effectively, then the whole world isn't broken; the tool is bad.
I've felt that C makes this code easy to write because it makes doing the right thing hard. What you are describing is just a lot of work in C, compared to a language with something akin to Java's generics, which are in turn an afterthought in the ML family of languages. What we're asking for is not that complicated from a PL standpoint. A generic streams library?
Economics plays an invisible part here. Someone writing a library has a limited amount of time to implement some set of features, and to balance that against other needs, like making the code "clean"/pretty and secure. In this case, pretty code and secure code are akin. Consumers would likewise have to balance feature needs against how likely the code is to explode. What it comes down to is that you aren't likely to have secure, stable code in a language that doesn't inherently encourage it.
It starts to become clearer, then, that the more modern, "prettier" languages offer material benefits in their efforts to be more elegant.
Thanks for this. How is this reading arbitrary memory locations though? Isn't this always reading what is near the pl? As in, can you really scan the entire process's memory range this way or just a small subset where malloc (or the stack, whichever this is) places pl?
This reminds me of what another programmer told me a long time ago when we were discussing C: "The problem with C is that people make terrible memory managers." So true.
I agree that it seems like an abstraction for this is missing, but I always have the feeling that what you're doing is covering holes in a leaking dam: you might get good at it, but you'll always have leaks.
I have always detested C (also C++) because it's so unreadable... the snippets of code you cite are just so dense, i.e. a function like n2s() gives pretty much no indication of what it does to a casual reader. Just reading the RFC (it is pretty much written in a C style) gives me the creeps.
The RFC doesn't mention why there has to be a payload, why the payload has to be a random size, why they are doing an echo of this payload, or why there has to be padding after the payload. And the data is presented as if it were just a regular C struct (I didn't know you could have a struct with a variable size, but apparently the fields are really pointers, or it's just a mental model and not a real struct).
Apparently the purpose of the payload is path MTU discovery. Something that is supposed to happen at the IP layer, but I don't know enough about datagram packets. I guess an application may want to know about the MTU as well...
I'm not here to point fingers, I'm just saying C is a nightmare to me and a reason for me to never be involved with systems programming or something like drafting RFCs ;-).
But if one can argue that C is a bad choice for writing this stuff, then that is not an isolated thing. "C" is also the language of the RFCs. "C" is also the mindset of the people doing that writing. After all, the language you speak determines how you think. It introduces concepts that become part of your mental models. I could give many examples, but that's not really the point.
And it's about style and what you give attention to. To me, that RFC is a really bad document. It starts to explain requirements for exceptional scenarios (like when the payload is too big) before even having introduced and explained the main concepts and the hows and whys.
So while you may argue that this is a C problem and not a protocol problem, it is really all related.
And while you may say, in response to someone blaming these coders, that blame is inappropriate (and it is) because these are volunteers donating their free time to something they find valuable, the whole distribution and burden of responsibility is, naturally, also part of the culture and how people self-organize and so on.
As someone else explained (https://news.ycombinator.com/item?id=7558394), the protocol is really bad, but it is the result of more or less political limitations around submitting RFCs for approval. There is no reason for the payload in TLS (though apparently there is in DTLS), but my point is simply this:
If you are doing inelegant design this will spill over into inelegant implementation. And you're bound to end up with flaws.
Rather than trying to isolate the fault here or there, I would say this is a much larger cultural thing to become aware of.
This sort of argument is becoming something of a fashion statement amongst some security people. It's not a strictly wrong argument: writing code in languages that make screwing up easy will invariably result in screwups.
But it's a disingenuous one. It ignores the realities of systems. The reality is that there is currently no widely available memory-safe language that is usable for something like OpenSSL. .NET and Java (and all the languages running on top of them) are not an option, as they are not everywhere and/or are not callable from other languages. Go could be a good candidate, but without proper dynamic linking it cannot serve as a library callable from other languages either. Rust has a lot of promise, but even now it keeps changing every other week, so it will be years before it can even be considered for something like this.
Additionally, although the parsing portions of OpenSSL need not deal with the hardware directly, the crypto portions do. So your memory-safe language needs some first-class escape hatch to unsafe code. A few of them do have this, others not so much.
It's fun to say C is inadequate, but the space it occupies does not have many competitors. That needs to change first.
First, I do realize that rewriting the software stack from the ground up to have only managed code is a huge task. I do think that as an industry, we should set a goal of having at least one server implementation along these lines (where 'set a goal' may mean, say, grants or calls for proposals). Microsoft Research implemented an experimental OS like that, although it probably didn't have all the features a modern OS would need. I don't know if we need a new language, but we do need a huge rethink of the server architecture, and not just a piece-by-piece rewrite, which I think will founder on the interface issues that you mentioned.
Anyway, I am quite realistic about the prospect of my comment having that kind of effect on the industry - I don't suffer from delusions of grandeur. I was aiming the comment more at people who choose C/C++ for no good reason to write a user-level app; that app is nearly certain to have memory use errors, and if it has any network or remote interface, chances are they can be easily exploited. I'd like as many people as possible to understand that they can't expect to avoid such errors, any more than one of the most heavily audited pieces of software avoided them. We have had decades of exploits of this vulnerability, and yet most programmers are oblivious to it, or think only bad programmers are at risk. So just as tptacek goes around telling people not to write their own crypto, I go around telling people - with less authority and effectiveness, unfortunately - not to write C/C++ code unless they really need to.
As for the performance issues forcing OpenSSL to use C, well, we apparently exposed all our secrets in the pursuit of shaving off those cycles. I hope we are happy.
How about Ada? It is time tested! GNU's Ada shares the same backend as GCC so it can be pretty fast. Good enough for DoD. =P
Edit: I say this having used VHDL quite a bit. I appreciate its type strictness and ranges.
We might be stuck with C for quite a while but then maybe the more interesting question is 'how does this sort of thing get past review?'. It's not hard to imagine how semantic bugs (say, the debian random or even the apple goto bug) can be missed. This one, on the other hand, hits things like 'are the parameters on memcpy sane' or 'is untrusted input sanitized' which you'd think would be on the checklist of a potential reviewer.
"Rust has a lot of promise, but even now it keeps changing every other week..."
A larger problem, in my opinion, is that things like OpenSSL are used (and should be!) from N other languages. As a result, calling into the library almost by definition requires lowest-common-denominator interfaces. Which is C.
C code calling into Rust can certainly be done, but I believe it currently prohibits using much of the standard library, which also removes a lot of the benefits.
C++ doesn't, I think, have as much of a problem there, but I'm somewhat skeptical of C++ as a silver bullet in this case.
I don't know about Ada and any other options.
[ATS, anyone?]
Why not write the code in C# (for example) and extract it to $SYSTEM_PROGRAMMING_LANGUAGE? It wouldn't be much different than what Xamarin are doing now for creating iOS and Android apps with C#.
>Additionally, although the parsing portions of OpenSSL need not deal with the hardware directly, the crypto portions do. So your memory-safe language needs some first-class escape hatch to unsafe code. A few of them do have this, others not so much.
For the other points there is some debate, but don't most serious languages have a C FFI?
I believe Haskell could be up to the job, but I heard that there were some difficulties in guarding against timing attacks. However, those could have just been noise. I know that a functional (in both senses, haha) operating system was made in Haskell.
Aren't Operating Systems lower level than OpenSSL?
Other than C, there are also C++ and D if you don't want to stray too far from C. The problem with C++ is that even though it is possible to adopt a memory-safe programming style with C++, the concepts are not prevalent in the community.
What about Ada? GNAT looked pretty good a few years back when I was trying to get into that sort of thing.
>This sort of argument is becoming something of a fashion statement amongst some security people.
Just the ones who don't understand how good API designs can work well to solve these problems, don't worry not all of us are like that :)
What you say can easily be disproved, and you are simply asking for too much if you ask for something to be a drop-in replacement for OpenSSL. Some re-architecting is required simply because of the insecurity of C.
For example, a shared library that implements SSL would have to be a shim for something living in a separate process space.
http://hackage.haskell.org/package/tls
That is a Haskell implementation of TLS. It is written in a language that has very strong guarantees about mutation, and a very powerful type system which can express complex invariants.
Yes, crypto primitives must be written in a low level language. C is not low level enough to write crypto, neither securely nor fast, so that's not an argument in its favor.
There are several languages that do fill that gap, but security people never use them. For example, Cyclone is pretty good (http://cyclone.thelanguage.org/).
How about D?
C++?
> C and other languages without memory checks are unsuitable for writing secure code
I vehemently disagree. Well-written C is very easy to audit. Much, much more so than languages like C# and Java, where something I could do with 200 lines in a single C source file requires 5 different classes in 5 different files. The problem with C is that a lot of people don't write it well.
Have you looked at the OpenSSL source? It's an ungodly f-cking disaster: it's very very difficult to understand and audit. THAT, I think, is the problem. BIND, the DNS server, used to have huge security issues all the time. They did a ground-up rewrite for version 9, and that by and large solved the problem: you don't read about BIND vulnerabilities that often anymore.
OpenSSL is the new BIND; and we desperately need it to be fixed.
(If I'm wrong about BIND, please correct me, but AFAICS the only non-DoS vulnerability they've had since version 9 is CVE-2008-0122)
> but we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm.
If we're playing the blame game, blame the x86 architecture, not the C language. If x86 stacks grew up in memory (that is, from lower to higher addresses), almost all "stack smashing" attacks would be impossible, and a whole lot of big security bugs over the last 20 years could never have happened.
(The SSL bug is not a stack-smashing attack, but several of the exploits leveraged by the Morris worm were)
> The problem with C is that a lot of people don't write it well.
Including the people responsible for one of the most important security-related libraries in the world. No matter how good and careful a programmer is, they are still human and prone to errors. Why not put every chance on our side and use languages (e.g. Rust, Ada, ATS, etc.) that make entire classes of errors impossible? They won't fix all problems, and definitely not those associated with having a bad code base, but it'd still be many times better than hoping people don't screw up with pointer lifetimes.
>The problem with C is that a lot of people don't write it well.
There are languages that make it very, very hard to write bad code. Haskell is a good example: if your program type-checks, there's a high chance it's correct.
C is a language that doesn't offer many advantages but very many disadvantages for its weak assurances. Things like the Haskell compiler show that you can get strong typing for free, and there are no longer many excuses for running around with raw pointers, except in legacy code.
Agreed. Simple code is easy to understand and just as easy to find any bugs in. After looking at the heartbeat spec and the code, I can already see a simplification that, had it been written this way, would've likely avoided introducing this bug. Instead of allocating memory of a new length, how about just validating the existing message fields as per the spec:
> The total length of a HeartbeatMessage MUST NOT exceed 2^14 or max_fragment_length when negotiated as defined in [RFC6066].
> The padding_length MUST be at least 16.
> The sender of a HeartbeatMessage MUST use a random padding of at least 16 bytes.
> If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently.
Then if it's all good, modify the buffer to change its type to heartbeat_response, fill the padding with new random bytes, and send this response. No need to copy the payload (which is where the bug was), no need to allocate more memory.
(Now I'm sure someone will try to find a flaw in this approach...)
My favorite is that the Morris worm dates back to late 1988 when MS was starting the development of OS/2 2.0 and NT. Yea, I am talking about the decision to use a flat address space instead of segmented.
That's why I have high hopes for Rust. We really need to move away from C for critical infrastructure. Perhaps C++ as well, though the latter does have more ways to mitigate certain memory issues.
Incidentally, someone on the mailing list brought up the issue of having a compiler flag to disable bounds checking. However, the Rust authors were strictly against it.
I'm excited about Rust for this reason as well, but in practice I find myself thinking a lot about data moving into and out of various C libraries. The great but inevitably imperfect theory is that those call sites are called out explicitly and should be as limited as possible. It works well but isn't a silver bullet. I'm hopeful that as the language ecosystem matures there will be increasingly mature C library wrappers and (even better!) native, memory-safe, Rust replacements for things.
I'd disagree about C++. In my experience, the only things it adds are (1) a false sense of security (since the compiler will flag so many things which are not really big problems, but will happily ignore most overrun issues), (2) lots of complicated ways to screw up, such as not properly allocating/deleting things deep in some templated structure, and (3) interference with checking tools - I got way more false positives from Valgrind in C++ code than in C.
I wish godspeed to Rust and any other language which doesn't expose the raw underlying computer the way C/C++ does, which is IMO insane for application programming.
"The fact is that no programmer is good enough to write code which is free from such vulnerabilities."
"...you are kidding yourself if you think you can handle this better than the OpenSSL team."
Well, I can think of at least one example that counters this supposition. As someone points out elsewhere in this thread, BIND is like OpenSSL. And others wrote better alternatives, one of which offered a cash reward for any security holes and has afaik never had a major security flaw.
What baffles me is that no matter how bad OpenSSL is shown to be, it will not shake some programmers' faith in it.
I wonder if the commercial CAs will see a rise in the sale of certificates because of this.
Sloppy programmer blames language for his mistakes. News at 11.
Nothing in the standard prevents a C compiler + tightly coupled malloc implementation from implementing bounds checks. Out-of-bounds operations result in undefined behavior, and crashing the program is a valid response to undefined behavior. If your malloc implementation cooperates, you can even bounds-check pointer arithmetic without violating calling conventions.
It's quite a shame that there isn't a compiler that does this, and it's a project I've considered spending some time on if I can find a big enough block of that to get a solid start.
Unrestricted pointer arithmetic is indeed incompatible with memory safety. You set a pointer to point to one structure, then you change it and it now points to another structure or array. The compiler doesn't know the semantics of your code, so how can it tell if you meant to do that? And malloc/memcpy is way too low-level to check this stuff. It only sees memory addresses; it has no idea what variables are in them. Tightly coupled would mean passing information like "variable secret_key occupies address such-and-such" into the libc, which does violate POSIX standards, and will result in lots of code breaking. I don't see why we wouldn't just write in C# or Java or Rust, instead of a memory-safe subset of C (and it would have to be a subset).
Edit: here's one project for making a memory-safe C: http://www.seclab.cs.sunysb.edu/mscc/ . Interesting, but (a) it is a subset of C, (b) it doesn't remove all vulnerabilities, and (c) I still don't grok the advantage of using this over a language actually designed for modern, secure application programming.
C language environments that worked like this have been commercially available in the past: Saber-C in the '90s, and perhaps earlier, was one example.
One problem is that the obvious implementation technique is to change the representation of pointers (to include base and bounds information, or a pointer to that), which means that you need to redo a lot of the library as well. (Or convert representations when entering into a stock library routine, and accept that whatever it does with the pointer won't get bounds-checked.)
But it's certainly doable.
I implemented this once in my C interpreter picoc. Users hated it because it also prevented them from doing some crazy C memory access tricks, so I ended up taking it out.
If you have a char *buf you got from the network stack and you have to copy buf[3] bytes from position buf+15, then the compiler doesn't know what to check for, as long as you don't cross the boundary of that buffer.
Oncoming Intel memory protection extensions: http://software.intel.com/en-us/articles/introduction-to-int...
"Intel MPX is a set of processor features which, with compiler, runtime library and OS support, brings increased robustness to software by checking pointer references whose compile time normal intentions are usurped at runtime due to buffer overflow."
I think clang's AddressSanitizer gets pretty close to what you want. It misses some tricky cases on use-after-return, but other than that it offers pretty robust memory safety model for bounds checks, double free, and so on.
> This vulnerability is the result of yet another missing bound check. It wasn't discovered by Valgrind or some such tool, since it is not normally triggered - it needs to be triggered maliciously or by a testing protocol which is smart enough to look for it (a very difficult thing to do, as I explained on the original thread).
You could also look at this bug as an input sanitization failure. The author didn't consider what to do when the length field in the header is longer than what comes over the wire (even when writing the code in a secure language, this case should be handled somehow, maybe by logging or dropping the packet).
The defined behaviour would be to discard the packet. In a secure language, the buffer would have had a "length" property, and the code would have crashed when a read beyond the buffer's end was attempted. But in C, buffers are just pointers, so there is fundamentally nothing wrong with reading beyond the end of the buffer. So instead of a crash, we get silent memory exposure.
Isn't this basically the whole point of QuickCheck-like testing frameworks? They're a specification that a fuzzer attempts to falsify in some way. I don't see why most C projects couldn't be doing this.
Speaking of proofs, how about we write security critical code in haskell? You need a very simple runtime, but beyond that it would work pretty much wherever.
Most memory-related bugs are automatically eliminated, and security proofs are easier.
If you haven't seen it already, check out Cryptol from Galois: http://corp.galois.com/cryptol/
It's a crypto DSL that I believe is implemented in Haskell (it compiles to Haskell, C, C++ and a few others).
Particularly relevant example: TLS/SSL implementation in Haskell.
http://hackage.haskell.org/package/tls
Virtually all code exposed to the Internet is security critical, however.
Agree a bazillion times.
Go or Java on top. Coding in C is like juggling chainsaws to say you can juggle them. C is certainly better than old school Fortran where memory management wasn't developed until later, but platforms like Erlang, Go and JRuby are really hard to beat.
The only problem is convincing people to migrate to different tools and transition codebases to another language. It would take a large project like FreeBSD, LLVM or the Linux kernel to move the needle.
Fortran was not meant to be a systems programming language. The fact that it did not have memory management does actually make sense in scientific applications, where you typically know your problem size in advance or can just recompile before a day long computation.
Is anyone working on an OpenSSL port in Rust, which lacks the memory vulnerabilities of C?
Why port all the security vulns over to Rust? There are already a handful of SSL implementations, it isn't horribly hard to do. Maybe start with http://hackage.haskell.org/package/tls
They are still making breaking changes to the language so I really doubt it.
> we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm. It has now cost us a two-year window in which 70% of our internet traffic was potentially exposed. It will cost us more before we manage to end it.
Could one make a new kind of OS where C programs are compiled to some intermediate representation then when run this is JIT compiled within a managed hypervisor sandbox? Could Chrome OS become something like this? Does this already exist? MS had a managed code OS called Singularity.
> My opinion, then and now, is that C and other languages without memory checks are unsuitable for writing secure code.
I think they can be used to write secure code, but it has to be done carefully, with really thorough checks and unit tests, and a constant awareness of the vulnerabilities.
Everything I've heard about OpenSSL so far, suggests it was done by a bunch of cowboys who don't care about code quality. Those people shouldn't be writing C, but a safer language.
I don't think C should be blamed for the HeartBleed bug. Please see http://www.pixelstech.net/article/1397465547-HeartBleed%3A-S...
You make good points.
However, qmail is written in C and has a very good record. So I would disagree with "The fact is that no programmer is good enough to write code which is free from such vulnerabilities."
There seem to be at least two programmers who are capable of that.
Java, yes, hmmm... Oh wait, but the Java VM is written in C and is host to some of the worst web browser zero-days we know of.
Fundamentally, I think we're going to have to give up on security and start handing out drivers licenses to anyone who wants to use the internet.
If that would work Virtual Machines and runtimes wouldn't have vulnerabilities.
So uhm. Yeah, that doesn't work either.
Edit: Btw, since HN has this obsession with Tarsnap: it's written in C. So you should stop obsessing about it and downvote me some more.
This argument came up in the thread from a few years ago. It is quite wrong-headed. I would like to give a clear answer to it:
Virtual machines and runtimes may be vulnerable to malicious CODE. That's bad. Programs written in unmanaged languages are vulnerable to malicious DATA. That's horrible and unmitigatable.
Vulns to malicious code are bad, but they may be mitigated by not running untrusted code (hard, but doable in contexts of high security). They are also mitigated by the fact that the runtime or VM is a small piece of code which may even be amenable to formal verification.
Vulns to malicious data, or malicious connection patterns, are impossible to avoid. You can't accept only trusted data in anything user-facing. Also, these vulnerabilities are spread through billions of lines of application and OS code, as opposed to core runtime/VM.
So reducing the attack surface isn't a laudable goal in your book, because, hey, the VM itself can have vulnerabilities, so there isn't a point? I think the point is that programmers will always make these mistakes, and we should confine the unsafe code that is written to as small an attack surface as possible. You're never going to eliminate vulnerabilities, but we sure can try to reduce the likelihood of them occurring. If there is some objective measurement that says this isn't the case, i.e. that the number of JVM vulnerabilities like this outstrips or is on par with the vulnerabilities that occur in purely C/C++ applications, I would love to see it.
Ultimately, I think the better answer will be a language that inherently provides the primitives for safe memory management but is low-level and highly performant, i.e. Rust or something like it.
In keeping with the tradition of bad car analogies, that's like saying "Driving cars with automatic traction control won't make accidents go away, so automatic traction control is pointless".
Languages with bounds checks on array accesses don't solve everything, but that doesn't mean that they don't work. They do remove entire classes of silent failures that can potentially slip through the cracks in C-like languages. VMs aren't needed for this -- most of the strongly typed functional languages, D, Go, Rust, and others all compile down to native machine code.
Careful API design, discipline, and good coding in C can also mitigate this sort of problem manually, although (like most things in C), it's extra work, and needs careful thought to ensure correctness.
VMs generally do not have this type of vulnerability (buffer overrun).
Also, most vulnerabilities in (e.g.) the JVM can only be exploited by running malicious code inside the VM. Here, the attacker is supplying data used by OpenSSL, but is not able to supply arbitrary code.
Given the severity of this bug, the UX of the site is failing anyone who isn't a fulltime sysadmin.
Suggestion: big, bold TLDR ("The sky is falling. Check your OpenSSL version right now") with a link on what to do sorted by OS vendor.
Step 1: Here's a command to spit out your OpenSSL version. If it is the following string, go to step 2.
Step 2: Here's how to update your OpenSSL. Here are links to guides on reissuing keys.
Probably OK if the whole remediation bit links to a wiki that gets updated as the various vendors push their patches.
Agree. This needs a big fat the-world-is-coming-to-an-end style of warning.
I've just shut down the webservers running SSL that I can control. If you are vulnerable, don't want to build OpenSSL from source, and can afford the outage, I'd recommend doing the same.
OTHERWISE BUILD FROM SOURCE IMMEDIATELY, PATCH, AND GET NEW KEYS!
Let's hope CAs don't get swamped by all the CSRs. Or rather, let's hope they do, so we see people are doing something...
For me right now these are just my hobby projects. So I don't care if they're down. But I imagine it will be fun tomorrow.
And when it's fixed, get new keys.
Btw: I'm a dev. Not a sysadmin though :P
Edit: Debian is patched. I'm online again \o/
OK, could anyone assist me with updating OpenSSL without breaking anything? I've fetched the newest sources from openssl.org and compiled them, but "make install" doesn't actually install it; it only got compiled, and issuing "openssl version" still gives me the old version.
What I want to do is to patch it so our webserver uses new version.
Not to sound like a commercial for Cloudflare or anything, but putting your infrastructure behind their services can protect users while you perform your patching, according to their latest blog post: http://blog.cloudflare.com/staying-ahead-of-openssl-vulnerab...
On a linux box: [For each set of certs used for each of your public facing sites...]
1. Open a terminal and cd into /etc/path_to_ssl_certs_folder [per site].
Ex. /etc/ssl/nginx
2. Regen the certs [example nginx mail server]
openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:4096 -keyout mailkey.pem -out mailcert.pem
[this command generates a private key and server cert and outputs them as PEMs] [Note also the key size is 4096; you may want 2048. And I use -sha256, as SHA-1 is considered too weak nowadays. These certs are valid for 3650 days... 10 years]
Since the command overwrites certs/keys in the current directory of the same name as the outfiles...that's it...you're done. Just restart nginx.
If you change a self-signed cert, like above, expect a new warning from the client on the next connection... this is just your new cert being encountered. Click permanently accept... blah blah.
------------------------------------------------------------------------
On a Windows box:
1. open an admin cmd window and run 'mmc'.
2. Add a new snap-in for Certificates as local machine.
3. Find and 'Disable all purposes for this cert'.
4. Import your new certs from your 3rd party or that you rolled yourself from your enterprise CA.
5. Test new cert.
6. Delete old cert.
[If you run your own CA, you should already know what to do...]
Agreed. They should reorder their headings, first should be What is it? and second should be How to stop it?
On my CentOS boxes I ran 'yum list | grep openssl'
This is the standard command:
11 replies →
I've built a web tester for this bug, find it at
http://filippo.io/Heartbleed/
It actually exploits the bug, since it was quite trivial, and echoes some memory.
It's written in Go, no more than 100 lines. I'll release the code in some time.
Interestingly, your tool claims our website (SSL-terminated at our ELB instance) is still vulnerable; while this other tool (http://possible.lv/tools/hb) claims we are unaffected.
Another, known unpatched, app is reported to be affected by both tools.
Is it possible that FiloSottile/Heartbleed may report false positives?
From what I've learned, it reports back if it gets something, when it should get nothing.
How vulnerable a specific site is depends on luck. Yahoo must have broken a whole bunch of mirrors because total amateurs can send mail.yahoo.com a certain blob of code and it has a good chance of returning a stranger's password.
My upgraded debian and ubuntu boxes are still reported as vulnerable.... Who's wrong, who's right?
Have you restarted the services linked against openssl?
lsof | grep ssl | grep DEL
1 reply →
Would love to see the code and test it against a rebuilt, patched nginx.
Filippo has hosted it on GitHub.
https://github.com/FiloSottile/Heartbleed
Just run it against it?
1 reply →
It says that the heartbleed.com site itself is vulnerable.
Looks like it's fixed.
Exactly what I was looking for, thanks! This should be part of the official heartbleed site not hidden away in comments here.
Nice work
This thing has been in the wild for two years. What are the odds it hasn't been systematically abused? And what does this imply?
To me it sounds kind of like finding out the fence in your backyard was cut open two years ago. Except in this case the backyard is two thirds of the internet.
Worse, it's retroactively unfixable: Even doing all this [revoking certs, new secret keys, new certificates] will still leave any traffic intercepted by the attacker in the past still vulnerable to decryption.
So it would be a good idea to change all your passwords to critical services like email and banks, once they have issued new certs and updated their openssl.
> Worse, it's retroactively unfixable
That's slightly misleading. Every private key disclosure leads to decryption of past traffic unless forward secrecy is used.
However, if you switch to a fixed version of OpenSSL now, then an attacker cannot retroactively exploit this bug even if they have recorded all your past traffic, because exploiting the bug requires a live connection.
(Of course, this only applies to attackers who did not know about the bug before it was publicly released, so some worry is still justified. I only wanted to point out that the "retroactively unfixable" is a misleading exaggeration.)
1 reply →
Shouldn't Perfect Forward Secrecy protect against exactly this kind of scenario where the server's primary keys are compromised?
3 replies →
How do you suggest going about finding out if a bank updated the OpenSSL version in its DMZ?
1 reply →
Not again. GAH. I just did this after GnutlsGate.
> And what does this imply?
To me, this implies that it's not too easy to exploit, or we would've seen it fixed much sooner.
It's extremely easy to exploit once it is known. The question is simply: Did people know about it and not disclose so they could keep exploiting it?
As of now (21:04 UTC) this isn't fixed in Debian https://security-tracker.debian.org/tracker/CVE-2014-0160 nor Ubuntu http://people.canonical.com/~ubuntu-security/cve/2014/CVE-20...
Got a long night ahead :/
I just installed update openssl_1.0.1e-2+deb7u5 and libssl1.0.0_1.0.1e-2+deb7u5 on debian wheezy, so it seems the fix is now available.
You need to manually restart all processes linking libssl, too.
Something like "lsof -n | grep ssl | grep DEL" can identify processes using the DELeted old version of libssl after apt-get upgrading.
4 replies →
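If you want to script that check, here's a sketch in Python (the sample lsof line below is hand-written for illustration; real output varies by system, so adapt the parsing):

```python
def pids_holding_deleted_libssl(lsof_lines):
    """From `lsof -n` output, collect PIDs still mapping a DELeted libssl,
    i.e. processes that must be restarted after the library upgrade."""
    pids = set()
    for line in lsof_lines:
        fields = line.split()
        # lsof columns (roughly): COMMAND PID USER FD TYPE ... NAME
        if len(fields) >= 2 and "DEL" in fields and any("libssl" in f for f in fields):
            pids.add(fields[1])
    return sorted(pids)

# Hypothetical lsof output, for illustration only:
sample = [
    "nginx  1234 root DEL REG 8,1 123 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0",
    "sshd   5678 root mem REG 8,1 456 /usr/lib/x86_64-linux-gnu/libcrypto.so",
]
# pids_holding_deleted_libssl(sample) -> ["1234"]
```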
Just saw the following updated when I did an 'apt-get clean; aptitude dist-upgrade' on Debian Wheezy:
libssl1.0.0 openssh-client openssh-server openssl ssh
2 replies →
Just received an upgrade on Ubuntu 12.04 LTS as well, apt-get clean issued before updating.
EDIT: If you are using DigitalOcean, the update is not yet on their mirrors. Issue 'sudo sed -i "s/mirrors\.digitalocean/archive.ubuntu/g" /etc/apt/sources.list;sudo apt-get clean;sudo apt-get update;sudo apt-get upgrade' to get the patch. Check the comment by 0x0 above ( https://news.ycombinator.com/item?id=7549842 ) to find any services which need restarting.
4 replies →
Should the priority on the ubuntu-security page be higher than "Medium"?
Basically, yes. However, from my experience, package update urgencies are no good indicator of an update's actual priority. It's in the "-security" channels and you're supposed to apply all updates from there.
Thanks for the links. The big thing heartbleed.com is missing is what to do!
Ubuntu 12.04 patch ready https://launchpad.net/ubuntu/+source/openssl/1.0.1-4ubuntu5....
1.0.1e-2+deb7u5 appearing now on security.debian.org.
I just did a apt-get update and apt-get upgrade and I saw upgrades for openssh-client and openssh-server.
OpenSSH != OpenSSL. Those upgrades are for a different vulnerability in OpenSSH.
1 reply →
Just got an openssl upgrade pushed by Ubuntu 12.04 as well.
Node.js sort-of dodged a bullet here. It includes a version of openssl that it links against when building the crypto module (and, I would think, the tls module). Node.js v0.10.26 uses OpenSSL 1.0.1e 11 Feb 2013.
However (in openssl.gyp): https://github.com/joyent/node/blob/master/deps/openssl/open...
It disables the heartbeat with a compile-time option due to a workaround for Microsoft's IIS, of all things.
So the affected window for node would have been Sep 11, 2012 to Mar 27, 2013 (based on the commit history).
What worries me about this is that the commit that fixes it [0] doesn't include any tests. Is that normal in crypto? If I committed a fix to a show-stopper bug without any tests at my day job I'd feel very amateur.
[0] http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=...
Sometimes things are time-critical.....
Ah, the old middle-management excuse: "We don't have time to write tests!"
2 replies →
What a great writeup. Comprehensive without being overly verbose, answers to "what does this mean?" and "does this affect me?", and clear calls to action.
While I'm not happy at having to spend my Monday patching a kajillion machines, I welcome more vulnerability writeups in this vein.
Writeup was too long. We need to know the short and sweet of what to fix.
Update to 1.0.1g, redo all crypto. That is, revoke certs and keys and regenerate.
8 replies →
What? ...As soon as the page loads it's right there without having to scroll the page:
http://i.imgur.com/ZwTclan.png
(What I want now is an exploit.c, PoC.py, pwnSSL.rb, etc... but I guess it would be irresponsible to provide that to the script-kiddies of the interwebz right now)
3 replies →
How did Cloudflare get access to this bug a week before it was made public, yet no distro has a package ready?
How's that for responsible disclosure?
I believe they got access because one of their customers found it and reported it to them; they reported it to OpenSSL, and then it somehow leaked (either with the OpenSSL release, or via someone else), and then they posted their now-public writeup of it.
That's not correct. One of the individuals who discovered the bug contacted us as a large provider of SSL termination services. We were asked not to further disclose the details until it was officially patched and announced by OpenSSL. The official announcement occurred today after which we put up a post to let our customers know that they were protected.
2 replies →
Holy shit. That seems worse than the debian openssl debacle.
If I got that right, ALL OpenSSL private keys are now potentially compromised.
I hope vendors push fixes soon, and then I guess I'm busy for a few days regenerating private keys.
Oh it's even worse, basically every secret you had in your server processes' RAM was potentially read in real-time by an attacker for the last 2 years.
Isn't there any memory protection on Linux? Something running as www-data shouldn't be able to read the ssh-server's RAM?
So it's bad, but it's not that bad unless something exposing this bug (webserver with ssl, vpn, or other service) runs as root?
3 replies →
Unless you used forward secrecy, which you should anyway in case of a key compromise. Key compromises can happen in many ways.
Honestly, why aren't the formal verification people jumping on this? I keep hearing about automatic code generation from proof systems like Coq and Agda but it's always some toy example like iterative version of fibonacci from the recursive version or something else just as mundane. Wouldn't cryptography be a perfect playground for making new discoveries? At the end of the day all crypto is just number theory and number theory is as formal a system as it gets. Why don't we have formal proofs for correct functionality of OpenSSL? Instead of a thousand eyes looking at pointers and making sure they all point to the right places why don't we formally prove it? I don't mean me but maybe some grad student.
You may be interested in Quark, which is a browser kernel written using Coq http://goto.ucsd.edu/quark/
Yes, why doesn't the same thing exist for SSL? The fact that quark was funded by the NSF means that there is interest in actually doing stuff like this.
1 reply →
I think the summary is a bit too sensationalistic in terms of what the actual security implications are:
> The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software.
Yes, while that's true, it's not a "read the whole process' memory" vulnerability which would definitely be cause for panic. The details are subtle:
> Can attacker access only 64k of the memory? There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.
The address space of a process is normally far bigger than 64KB, and while the bug does allow an arbitrary number of 64KB reads, it is important to note that the attacker cannot directly control where that 64KB will come from. If you're lucky, you'll get a whole bunch of keys. If you're unlucky, you might get unencrypted data you sent/received, which you would have anyway. If you're really unlucky, you get 64KB of zero bytes every time.
Then there's also the question of knowing exactly what/where the actual secrets are. Encryption keys (should) look like random data, and there's a lot of other random-looking stuff in crypto libraries' state. Even supposing you know that there is a key, of some type, somewhere in a 64KB block of random-looking data, you still need to find where inside that data the key is, what type of key it is, and more importantly, whose traffic it protects before you can do anything malicious.
> Without using any privileged information or credentials we were able steal from ourselves the secret keys
It really helps when looking for keys, if you already know what the keys are.
In other words, while this is a cause for concern, it's not anywhere near "everything is wide open", and that is probably the reason why it has remained undiscovered for so long.
Edit: downvotes. Care to explain?
It's not hard to screen what's returned for chunks that look like they could be keys (you know the private key's size by looking at the target's certificate, you know it's not all zeros, etc.) and then simply exhaustively check chunks against their public key.
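That screening step is easy to sketch with toy numbers (real primes are on the order of 128 bytes, not 1; this only shows the idea of testing leaked bytes as candidate factors of the public modulus n, which OpenSSL keeps the CRT primes for in RAM):

```python
def find_prime_factor(leaked: bytes, n: int, prime_bytes: int):
    """Slide a window over leaked memory and test each candidate chunk
    as a factor of the public RSA modulus n. A hit yields one of the
    private primes, and with it the private key."""
    for i in range(len(leaked) - prime_bytes + 1):
        # Try both byte orders; the in-memory layout is implementation-defined.
        for order in ("big", "little"):
            candidate = int.from_bytes(leaked[i:i + prime_bytes], order)
            if 1 < candidate < n and n % candidate == 0:
                return candidate
    return None

# Toy example: n = p * q with p = 61, q = 53 (obviously not real key sizes).
p, q = 61, 53
leaked = b"\x00junk" + p.to_bytes(1, "big") + b"more junk"
# find_prime_factor(leaked, p * q, 1) -> 61
```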
I just looked at one of my running apache processes, it only has 3MB of heap mapped (looked at /proc/12345/maps). That's not a whole lot of space to hide the keys in.
I agree entirely with your post, and I can't quite understand the hysteria in this thread. The odds of getting a key using this technique are incredibly low to begin with, let alone being able to recognize you have one, and how to correlate it with any useful encrypted data.
Supposing you do hit the lottery and get a key somewhere in your packet, you now have to find the starting byte for it, which means having data to attempt to decrypt it with. However, now you get bit by the fact that you don't have any privileged information or credentials, so you have no idea where decryptable information lives.
Assuming you are even able to intercept some traffic that's encrypted, you now have to try every word-aligned 256B(?) string of data you collected from the server, and hope you can decrypt the data. The amount of storage and processing time for this is already ridiculous, since you have to manually check if the data looks "good" or not.
The odds of all of these things lining up is infinitesimal for anything worth being worried about (banks, credit cards, etc.), so the effort involved far outweighs the payoffs (you only get 1 person's information after all of that). This is especially true when compared with traditional means of collecting this data through more generic viruses and social engineering.
So, while I'll be updating my personal systems, I'm not going to jump on to the "the sky is falling" train just yet, until someone can give a good example of how this could be practically exploited.
I have successfully extracted a key and decrypted traffic in a lab. I'm refining my automatic process. You're forgetting analysis of the runtime layout of OpenSSL in RAM which is quite predictable on machines without defensive measures. I have a 100% success rate extracting memory and about a 20% success rate programmatically extracting the secret key of the server. I'm nearly 100% against a certain version of Apache with standard distribution configuration.
I did this with no formal CS education and about 400 lines of code. I'm an operations engineer, not a security expert. Once I get it 100% and review my situation legally, I'll probably publish what I have.
Now is not the time to be conservative. Efforts to downplay this vulnerability are directly damaging to the Internet's security and, given that you are a single-issue poster, suspicious.
6 replies →
>Supposing you do hit the lottery and get a key somewhere in your packet, you now have to find the starting byte for it, which means having data to attempt to decrypt it with. However, now you get bit by the fact that you don't have any privileged information or credentials, so you have no idea where decryptable information lives.
Login page of any SaaS will be transmitted over SSL and you'll know what it looks like a priori.
https://twitter.com/WarrenGuy/status/453510021930680320
Here's the patch/commit; I don't know why it's not linked from the OpenSSL changelog or heartbleed.com. A suspicious lack of transparency.
http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=...
I'm very curious to see the change that introduced the bug in the first place. According to the announcement it was introduced in 1.0.1. That's the version that added Heartbeat support, so maybe it was a bug from the beginning.
Yes, this was introduced in the original heartbeat extension commit (2012-01-01):
http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=bd69...
if (1 + 2 + 16 > s->s3->rrec.length)
I don't know C well - why write 19 like this?
Probably to make it more clear what you're referring to, and double-check yourself. There are probably components that are 1 byte, 2 bytes, and 16 bytes long. Writing it out makes it clear and eliminates a chance for human error in the sum, more than a magic 19 does. (I guess 16 is pretty magical too, though. At least it's a "round" number, and in context may be a well-known field size of something in the protocol.)
1 reply →
Those numbers probably have some significance. `1` seems to be "heartbeat type" and `2` seems to be "heartbeat length".
1 reply →
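For anyone following along, here's a toy Python model of what those constants guard in the patched code (the field names and layout here are my own reading of the patch, not OpenSSL's identifiers):

```python
HB_TYPE_LEN = 1      # 1 byte: heartbeat message type
HB_LENGTH_LEN = 2    # 2 bytes: declared payload length
HB_MIN_PADDING = 16  # 16 bytes: minimum random padding required by the spec

def heartbeat_record_ok(record: bytes) -> bool:
    """Mimics the two patched checks: the record must hold at least a
    type byte, a length field, and minimum padding (1 + 2 + 16 = 19),
    and the declared payload must actually fit inside the record."""
    if len(record) < HB_TYPE_LEN + HB_LENGTH_LEN + HB_MIN_PADDING:
        return False  # the `1 + 2 + 16 > s->s3->rrec.length` check
    declared = int.from_bytes(record[1:3], "big")
    return HB_TYPE_LEN + HB_LENGTH_LEN + declared + HB_MIN_PADDING <= len(record)
```

The missing second check is the whole bug: the vulnerable code trusted `declared` without comparing it to the actual record length.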
Found a Python PoC: http://s3.jspenguin.org/ssltest.py
Edit: and just used it to dump 64K from a known-vulnerable device we control. Got a session cookie. Jeez.
JESUS CHRIST, all sorts of private information. Patch your servers now!
After reading your comment, I started looking back at the packets I got using the script on a site I knew was not patched. Damn.. there are plaintext passwords in there for paypal.
This shit is scary.
1 reply →
Looks like that file was pulled. Here's a mirror on Pastebin:
http://pastebin.com/YsdUXL1F
Works pretty well on openssl.org...
Does SSH (specifically sshd) on major OSes use affected versions of OpenSSL? [answer pulled up from replies below: since sshd doesn't use TLS protocol, it isn't affected by this bug, even if it does use affected OpenSSL versions]
What's the quickest check to see if sshd, or any other listening process, is vulnerable?
(For example, if "lsof | grep ssl" only shows 0.9.8-ish version numbers, is that a good sign?)
The bug is in the handling of the TLS protocol itself (actually, in a little-used extension of TLS, the TLS Record Layer Heartbeat Protocol), and isn't exposed in applications that just use TLS for crypto primitives.
Sooo in layman's terms - we only need to be worrying about HTTPS and not SSH ?
4 replies →
Does sshd only use TLS/OpenSSL "for crypto primitives"? Or not use OpenSSL at all?
4 replies →
Ok, so is TLS Heartbeat accessible in every service that uses TLS?
The big one that comes to mind aside from https is smtp/tls, e.g. port 587
Edit: Apparently a PoC on STARTTLS has already been written, so smtp/tls is definitely vulnerable
This doesn't sound like "responsible disclosure" to me - how can Codenomicon dump this news when all the major Linux vendors don't have patches ready to go ?
Because it was already disclosed the instant the OpenSSL release went out and the fix was public.
Well someone was able to give Cloudflare a heads up last week [1].
It would have been nice if the package maintainers could have had time to build ready-to-roll solutions with Heartbeat compiled out prior to the official OpenSSL fix.
[1] http://blog.cloudflare.com/staying-ahead-of-openssl-vulnerab...
> Recovery from this bug could benefit if the new version of the OpenSSL would both fix the bug and disable heartbeat temporarily until some future version... If only vulnerable versions of OpenSSL would continue to respond to the heartbeat for next few months then large scale coordinated response to reach owners of vulnerable services would become more feasible.
This sounds risky to me. I'm afraid attackers would benefit more from this decision than coordinated do-gooders.
In addition to that, it obviously disables the TLS heartbeat extension, which would break existing code that uses it.
Does anyone know how Amazon's Elastic Load Balancers are affected? I can't find anything on the AWS site
That is my concern as well. We are still running CentOS 6.4, which does not have the affected version of OpenSSL, but we terminate SSL at the ELB, so if they are affected then our keys are not safe.
Edit: I've posted on the support forum, hopefully they get back to us https://forums.aws.amazon.com/thread.jspa?threadID=149690
I opened a support ticket, and Amazon just responded to say that yes, ELBs are vulnerable. I've posted their reply into that thread.
1 reply →
The forum thread has just been updated with this reply:
"We can confirm that load balancers using Elastic Load Balancing SSL termination are vulnerable to the Heartbleed Bug (CVE-2014-0160) reported earlier today. We are currently working to mitigate the impact of this issue and will provide further updates."
Our AWS ELBs were vulnerable, but an hour or so ago we checked again and they were good. Now to regen the certs...
Likewise, same question for Rackspace's Cloud LBs.
Rackspace guy here. We have been digging in and it appears that we did have the impacted version of openssl installed but the heartbeat extension was disabled. Regardless, we have updated everything on the Cloud Load Balancer side to 1.0.1g. I will update here if we find anything different.
1 reply →
What are the chances that the NSA is having a field day with this in the 24-48 hours that it will take everyone to respond? Also, is it possible that CA's have been compromised to the point where root certs should not be trusted?
What are the odds that the NSA didn't already know about it? Even if you don't think they would have deliberately monkeywrenched OpenSSL (as they are widely believed to have done with RSA's BSAFE), they certainly have qualified people poring over widely used crypto libraries, looking for missing bounds checks and all manner of other faults --- quite likely with automated tooling.
As to CAs, there have been enough compromises already from other causes that serious crypto geeks like Moxie Marlinspike are trying to change the trust model to minimize the consequences --- see http://tack.io
Also, the NSA gets advance notice of bugs like this, so they've likely had it for a week. Enough time to steal the SSL keys from some juicy targets.
What's interesting is that RFC 1122 from 1989 warned about problems like these, and gave a very good approach to prevent them from occurring:
At every layer of the protocols, there is a general rule whose application can lead to enormous benefits in robustness and interoperability
[IP:1]: "Be liberal in what you accept, and conservative in what you send"
Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; [...]
Over 300,000 LoC:
This is too much by at least one order of magnitude. What's the going price for a crypto-level code review (I'm not even saying audit) these days?
Is all this code necessary for state-of-the art encryption or isn't it rather backwards compatibility baggage? If the latter: how much could be gained by splitting the project into '-current' and '-not'?
The cost of a cryptography code review is about $5-10k per week.
Thanks! So how does this work: Say I have this project and I want it audited -- would you (or the company/person that you had in mind) give me an estimate like "I'd need 3 weeks for 25, 5 weeks for 50 or 10 weeks for 95% coverage" or do you simply analyse away for a week (or whatever time I'm willing to pay you) and try to find something?
2 replies →
That cheap? A freelance web/Mobile developer can charge over $5K per week, I find it hard to believe that you could get quality security code review for that price
Great writeup but I guess I'm still a bit confused. As someone responsible for rails servers I can see that I need to update nginx and openssl as soon as packages become available or compile myself. What about keys though? Do I need to get our SSL certs re-issued? regenerate SSH keys? Anything else that I should be doing?
If you're running a vulnerable version of OpenSSL and want to be truly careful, assume your private keys (not just certs) are already compromised. Once new packages are available, you need to update and then re-roll your crypto.
Also, if you're using those keys to protect other secrets like passwords - say, DB credentials or AWS keys stored in an HTTP-hosted Git repo behind - you can't really assume those are safe either.
Fun times!
I don't quite understand how this bug works. I would appreciate any input from someone knowledgeable.
It sounds like the heartbeat code is sending some data in the handshake. That data should be harmless (padding? zeroes?) but the bug results in reading off the end of an array and from whatever other data happens to be there. Someone sniffing the connection can then see those bytes fly by. If they happened to contain private info, game over.
Is that a correct read on the situation? If so, my followup questions are: 1) Why is there any extra data being sent at all beyond a simple command to "heartbeat"? 2) How much data is being leaked here and at what rate? Is it a byte every couple of hours, is it kilobytes per minute, or what?
I am particularly interested in #1, since that's the part I really don't get at the moment. I suspect the answer to #2 will be implied by the answer to #1.
I'll give it a shot. Quoting a poster above.
>>> TLS heartbeat consists of a request packet including a payload; the other side reads and sends a response containing the same payload (plus some other padding).
So, what happens is that the payload comes in as a pointer and a size (up to 64KB). The server then prepares a response and copies the memory block [pointer, pointer+payloadSize] into the response.
The attack happens when the actual payload is smaller than the payload size claimed in the request. This results in the response preparation dumping the memory block [pointer+realPayloadSize, pointer+payloadSize] into the response.
Any data in this block is now exposed to the caller, and could contain anything from the process's memory.
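A toy simulation of that copy in Python (the buffer contents are invented for illustration; the real bug is an unchecked memcpy in C):

```python
def vulnerable_heartbeat_echo(memory: bytes, payload_offset: int,
                              claimed_length: int) -> bytes:
    """Simulates the bug: the response copies `claimed_length` bytes
    starting at the payload, trusting the attacker-supplied length
    instead of the actual record length."""
    return memory[payload_offset:payload_offset + claimed_length]

# Process memory: a 4-byte payload followed by adjacent secrets.
memory = b"ping" + b"SECRET_SESSION_COOKIE"
# Honest peer claims the real payload length:
#   vulnerable_heartbeat_echo(memory, 0, 4)  -> b"ping"
# Attacker claims far more than was sent and reads adjacent memory:
#   vulnerable_heartbeat_echo(memory, 0, 25) -> b"ping" plus the secret
```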
Thanks. That lines up with what I've seen elsewhere too. I think the main thing I was missing was that this is not a sniffing attack, but rather an active attack where you talk to a peer over SSL and basically trick it into sending you some content from its memory.
The original article says:
> Can attacker access only 64k of the memory? There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.
...so I guess the answer to 2 is only limited by how frequently you can change the heartbeat settings, and how frequently OpenSSL will send a heartbeat packet.
From what I understood, an attacker could get a 64KB chunk per request.
One obvious - if slightly paranoid - answer is that this was a deliberate backdoor. There appears to be a length field specific to the heartbeat packet that's used to determine how much data from the original packet is included in the response; it isn't checked against the actual packet length, and it allows lengths up to 64k, which is unnecessarily generous for the intended purpose but very useful for this attack.
RHEL updates are available: https://rhn.redhat.com/errata/RHSA-2014-0376.html
CentOS updates are available: http://lists.centos.org/pipermail/centos-announce/2014-April...
Fedora updates are available, hitting the mirrors, but you can get it earlier, instructions here: https://lists.fedoraproject.org/pipermail/announce/2014-Apri... https://lists.fedoraproject.org/pipermail/announce/2014-Apri...
A couple more data points:
I'm running Fedora 19 and Arch on my main dev machines/VMs, and as of this posting they are considered up to date. Both are vulnerable:
It does take time for these things to be tested and deployed. Regardless of severity of bug, distributions must test packages before sending them out to all their users.
It would be unfortunate if a new package were to be released immediately only to be soon masked/recalled due to unforeseen consequences.
Of note, the Gentoo package was bumped approximately 2 hours after the advisory was published.
To be clear, the Gentoo package is only in unstable. It hasn't reached stable yet. (https://bugs.gentoo.org/show_bug.cgi?id=507074)
Yeah, I haven't seen any new RPMs for RHEL/CentOS/Fedora yet. Kinda concerning, since I'd expect vendors to be given advance notice and the chance to prep updates to coincide with the announcement.
All my RHEL5 boxes are running 0.9.8, though, at least.
I've built RPMs of 1.0.1g for CentOS 6, based off the 1.0.1e source RPMs. https://www.dropbox.com/sh/7s1fiuvfwma16ra/iSz3Jfh1o-
RHEL6 update announcement
https://rhn.redhat.com/errata/RHSA-2014-0376.html
Likewise for Ubuntu 13.10: OpenSSL 1.0.1e 11 Feb 2013
And the current beta of 14.04: OpenSSL 1.0.1f 6 Jan 2014
The Arch package is available in Testing. https://www.archlinux.org/packages/testing/i686/openssl/
F20 and F19 updates are on their way to the updates repo.
https://admin.fedoraproject.org/updates/openssl-1.0.1e-37.fc...
https://admin.fedoraproject.org/updates/openssl-1.0.1e-37.fc...
apt-get update && apt-get -t testing install openssl yields OpenSSL 1.0.1f on Debian, sigh.
Ubuntu (and I suppose Debian too), just released a fix in 13.10.
Not affected directly on Mac OS:
Unless you installed the macports version, which is 1.0.1f
2 replies →
Remember that checking services for the OpenSSL heartbleed vulnerability without permission is actually illegal in many countries (UK in particular).
Whoa, this seems horrifying.
One (selfish) question I have is whether this can affect primary key material stored in an HSM. I'm assuming not, but that the session key generated by the HSM would still be susceptible.
If you are using an HSM, your long-term authenticity key won't be in the memory space of the process with OpenSSL inside it. So that should be OK.
However, everything else in that process (like, all the traffic you were hoping to protect) is basically toast.
Note that this bug affects way more programs than just Tor — expect everybody who runs an https webserver to be scrambling today.
"If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle." - torProject
From the CloudFlare blog: "This bug fix is a successful example of what is called responsible disclosure".
I just discovered this now and
Yields 1.0.1e as the available package, which is vulnerable. I guess not all "stakeholders" have been warned properly - or am I jumping to conclusions?
Apparently Red Hat, Debian, and Ubuntu weren't (from what I gather from reading mailing list posts) -- no idea who else.
That's not responsible at all, IMO. Whoever was in charge of this (NCSC-FI?) isn't very good at coordinating.
https://access.redhat.com/security/cve/CVE-2014-0160
https://bugzilla.redhat.com/show_bug.cgi?id=1084875
1 reply →
Note that distributions usually don't change the library version, they just apply the fix. Look for distribution-specific sub-version.
"Is there a bright side to all this?"
"Yes, we can sell you our software!"
Any chance this bug originated with the NSA? It seems like it would fall under their goal of subverting the infrastructure that keeps secrets on the internet. Of course this is exactly why such a goal is a bad idea - an unprotected internet causes widespread damage.
I don't know -- why don't you try reasoning it out since you're the one lobbing the accusation. Upon a very simple review of the code change/patch, one can see this is a relatively new feature, agreed upon and passed by the publicly available IETF, implemented naively.
"Never attribute to malice that which can be adequately explained by incompetence" -- slightly-butchered quote, from someone smarter than me.
It's not an accusation, it's a speculation. I don't have the ability to judge it for myself, i.e. "a simple review of the code change/patch". That's why I put it out there. I don't mind being refuted, but I wish it would be refuted rather than just downvoted blindly.
P.S. I think your quote doesn't capture the situation properly when someone is known to have malicious intent.
I don't think so - while the NSA would dearly like to have the access that this vulnerability would allow, they would dislike even more if anyone could have it. If they're going to insert a backdoor they're going to be damn sure only they have the key.
I knew I was going to be attacked for saying this, but isn't it a real possibility? We already know that they tried to weaken RSA.
they did not try to "weaken RSA", as in the RSA algorithm. They paid off and/or infiltrated RSA the corporation. You were not attacked, your posts simply contained wrong information and useless speculation.
Screaming about the NSA every time a security bug comes up is not interesting, productive, insightful, or useful, please stop.
It is evil to make totally unsupported accusations, even against the NSA. I've downvoted you, twice.
3 replies →
We really need to see some of the big companies take down their services until they've fixed this and call out for every company out there to audit themselves and confirm to users that this is serious and should be checked and that no service should stay online until they've patched their systems. This should get attention beyond just techies. Business as usual is not acceptable since every day that goes by is the opportunity for someone to take advantage of this and get the keys to your service and all past traffic.
I would not be surprised if people at the NSA, GCHQ and most state security services are going into overdrive right now to get access to anything and everything that is vulnerable to this bug.
> I would not be surprised if people at the NSA, GCHQ and most state security services are going into overdrive right now to get access to anything and everything that is vulnerable to this bug.
I assume the NSA has known about this bug for a long time and has been actively exploiting it.
Snort IDS rules to detect abuse can be found here: http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-liv...
Note: if you use mint.com, it's likely hitting your banks with your login on your behalf today. You'll still want to change those passwords even if you didn't use banking sites during the known vulnerability window.
The "known vulnerability window" is over 2 years.
The window in which the vulnerability was publicly known.
I'm trying to come up with a personal security model that doesn't end with me living in a cabin in the woods.
2 replies →
Is it a problem for those using SSH keys on GitHub?
No, SSH is not (directly) affected.
No, but your github.com password may be compromised.
You'll need to replace them ASAP, once GitHub has updated their version of OpenSSL. But if they run the 0.9.8 branch, you don't have to worry.
Answers in sibling threads suggest ssh/sshd is not affected, as ssh uses its own protocol other than TLS.
So, Google and Codenomicon independently found this two-year-old vulnerability at approximately the same time? How does that happen? Are they both looking at the same publicly-shared fuzzing data, or was there a patch that suddenly made it more obvious?
The obvious concern would be that one found it a good while ago, and just didn't bother announcing it until the other team was anyway. I don't believe that's what happened here, but I'm curious what the mechanism actually was.
Is there a way to tell if a third-party site has patched the bug? (Upgraded to 1.0.1g) Not much point in changing your password on that site before the vulnerability is fixed.
Someone wrote this: http://filippo.io/Heartbleed/
echo -e "quit\n" | openssl s_client -connect <HOSTNAME>:443 -tlsextdebug 2>&1| [ "` grep -c 'TLS server extension \"heartbeat\" (id=15), len=1'`" -gt 0 ] && echo 'Vulnerable'
That can false-positive, for what it's worth, in servers with fixed TLS heartbeats (instead of removing them).
All references I see recommend (for 1.0.1-series) to move to 1.0.1g - but the OpenSSL homepage[0] says that 1.0.1g is a Work in Progress. There is a download[1] link for it though. Anybody have definitive answer for what's going on here? It's a little confusing.
How widely implemented is certificate revocation?
I used the OpenSSL library for building a SAML token parser in JBoss (Java). All the front end stuff was Java and OpenSSL was used for public/private key decryption and validation of SAML tokens and signatures. I'm not sure exactly what an OpenSSL "server" is -- it sounds like there is a feature which you can implement (or not) in your webserver to test the SSL/TLS listener.
However, you could -- as I did -- use anything else as your interface for the web. Why you would specifically include a heartbeat just for SSL is beyond me. If a website is up and running, you'll know it with the usual methods, the HTTP status codes. You don't need a separate "heartbeat" to tell you that an internal mechanism for processing a protocol is running...do you?
Are people going straight to buying new domain names for every TLS bug discovered these days?
I'd be surprised if heartbleed.com was still available in 2014
It was - registered 2 days ago by Marko Laakso from Codenomicon, the guys credited (by themselves it seems !) with finding the bug:
http://www.networksolutions.com/whois/results.jsp?domain=hea...
2 replies →
rapidbleed! search rapidshare for "enc"
http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=getaccountde....
accountid=46048788 firstname=mandeep lastname=sihag servertime=1397038309 addtime=1359871506 username=heavenlybeast directstart=1 country=IN mailflags=n language=en jsconfig= email=heavenlybeast@live.com curfiles=36 curspace=1213591844 rapids=0 billeduntil=0 nortuntil=0 maxspacegb=10 additionalspacegb=0 maxdaytrafficmb=100 additionaldaytrafficmb=0 traffictoday=20511350 accounttype=0 valid=1 payabo=0 promocode=0 promotype=0 promovaliduntil=0 maxfilesize=300000000
Has anybody seen or created a PoC for this yet?
UPDATE: I've found one! Shouts to the venerable FiloSottile!
https://github.com/FiloSottile/Heartbleed/blob/master/bleed/...
It seems that this is likely to impact OpenVPN too, since it uses TLS - https://openvpn.net/index.php/open-source/337-why-openvpn-us...
Using a tls-auth key may help mitigate this (especially if you use UDP) since it should stop anything reaching the TLS handshake layer. https://openvpn.net/index.php/open-source/documentation/howt...
Testing my externally-accessible OpenVPN server revealed that it is indeed vulnerable. I just powered the box off, going to be a long day at work before I can get home and fix it :/
heartbleed.com itself is still using a vulnerable OpenSSL, according to http://filippo.io/Heartbleed/#heartbleed.com
It's been patched since, and is no longer vulnerable.
I've posted this link in a separate article but I think it is more useful here. https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx_conf...
How to build OpenSSL statically into a source build of Nginx. I just finished running this with nginx-1.4.7 and OpenSSL 1.0.1g and it compiled just fine. You'll have to tweak it for your environment, of course.
Is openssl in nginx statically linked by default? If so: ouch.
What popular SSL client software uses the vulnerable OpenSSL? (Any web browsers, for example on popular linuxes? How about 'curl' when connecting to HTTPS sites?)
Web browsers all by default use other crypto libraries. (Chromium can be linked to OpenSSL, some distros may ship this — I haven't looked.)
Email clients may be more vulnerable -- Thunderbird doesn't use OpenSSL, and neither does Mail.app, but I'm unaware what most others use.
Sidenote, OS X machines, by default, are not affected by this bug.
$ openssl version -a
OpenSSL 0.9.8y 5 Feb 2013
How would a client be compromised? I mean I guess a malicious server could send these bad heartbeat packets and sniff the keys, but if the server is pwned then your secrets are already revealed, right?
Imagine you've got a script that, among other things, does a 'wget' against some innocent plain HTTP URL. But an attacker intercepts your request, and redirects you to an HTTPS URL of their choosing.
Yes, wget uses OpenSSL, and follows redirects silently by default.
Now that server uses heartbleed to x-ray your client process memory, collecting all sorts of confidential information, including perhaps credentials to other services.
This bug has a lot of nasty, unintuitive permutations and repercussions that will take time to fully grasp.
Some Facebook servers are vulnerable for the Heartbleed bug: http://pastebin.com/dmYYpx2y
Android versions 4.1 and higher seem to be vulnerable (check the openssl.version file for every version in https://android.googlesource.com/platform/external/openssl.g... and compare with the vulnerable versions listed on http://heartbleed.com/).
I looked at the 4.4 (Kitkat) source code and it seems to me that the HEARTBEAT is disabled. https://android.googlesource.com/platform/external/openssl.g... contains -DOPENSSL_NO_HEARTBEATS
I am also unclear whether Dalvik or ART use OpenSSL for TLS connections.
It seems that Android is in fact not vulnerable: https://twitter.com/agl__/status/453472368589942785
A lot of doomsayers here but I'm running a service which could just as well be http. https is only there for show. Why do I need to upgrade?
If your process space is public, you don't.
There's probably the potential for segfaults, though, since the code might read past all allocations.
What I find strange is that I have a VPS setup on Digital Ocean, with Ubuntu LTS + OpenSSL 1.0.1 + a manually compiled Nginx. This combination should have been vulnerable, yet my website is not reported as vulnerable by the tools I tried for detecting the vulnerability.
Maybe DigitalOcean issued a fix without me noticing? I also updated my Ubuntu packages, yet OpenSSL is still at 1.0.1.
Tinfoil-hat time: isn't it interesting that within hours (?) of public disclosure of the bug, there's a domain, a logo, a full writeup, everything? The paranoid part of me says the nefarious powers-that-be want us to use the latest version, as though that would further their goals somehow.
Common sense says I'm just being silly. I just wonder.
A close reading of this thread shows that CloudFlare(?) knew about the vulnerability for a week. It's been known for at least that long.
How feasible would it be to write things like nginx, Apache, web browsers etc. so that they can use both OpenSSL and NSS, where you could choose what to use via config switch? Then it would be easy to "fix" such a bug when it occurs. The probability that both libraries have a vulnerability at the same time is probably very low.
Can some people who are smarter than me give us the flags we would like to compile this with manually?
says how to on the official notice: http://www.openssl.org/news/secadv_20140407.txt
Figured there might be more than just that flag you'd want to compile with.
1 reply →
OK well I just updated about 40 servers. Has anyone started working with CAs to reissue SSL certificates signed with a new key? Are they willing to do the reissue for free? In particular I use RapidSSL for most things and Verisign for a few bigger clients who prefer it.
How do you know your CA isn't vulnerable?
I don’t; but I do not know how I could ever be sure. I’m a generalist sys admin and my knowledge of crypto is limited to the basics. That being said my understanding is that this vulnerability is in the code that creates the sessions not in the certificates themselves. The risk is that my key already was compromised when I was using the vulnerable version. For me this means two things:
1) There is no easy way for me to confirm or deny the CA is fixed short of attempting to exploit them.
2) Even if the CA is not fixed, the vulnerability appears to be in the routines used for session management, not in the SSL certificate itself. While there is cc information and other stuff I would not like to see leaked, the CSR itself only contains my public key, not my private key. As long as my servers are patched and I have an SSL cert using a new keypair that I know has not been compromised, I am not sure if the CA's version of OpenSSL matters or not.
I am in no way trying to pretend I am an expert. I am sure there are problems with my analysis, but it still feels like it's time to be pragmatic and get a fix in place before asking all the what-ifs. Not that those questions should not be asked, but it's a matter of prioritizing.
So now that NSA can steal private keys, all the logs they collected over the years can be decrypted?
I believe that everyone should at least consider donating to the openssl software foundation: https://www.openssl.org/support/donations.html
In case it is useful to anyone: here's my notes on rebuilding RPMs for Fedora 18: https://gist.github.com/dahjelle/10151097
Heroku is working on it, but as of 07:02 UTC (30 mins ago) they have not released a fix: https://status.heroku.com/incidents/606
Since their own (status.heroku.com and heroku.com) certs are from 2013-10-03, this illustrates a bad situation post-heartbleed:
Were they using a vulnerable 1.0.1* OpenSSL, or not? Or did they (unlikely but possible) not adequately fix the issue?
This is information only the service provider has, and thus poses a dilemma (in terms of transparency at least).
Here's hoping for the best.
Would you be somewhat better protected (i.e. not losing private keys, etc.) if your machine sat behind a load balancer? The memory exposed would be that of the load balancer, correct?
Depends on if the LB was doing the SSL termination (offload).
But still, the private keys are at risk. There are worse scenarios, but just barely.
You were using [EC]DHE cipher suites, weren't you?
It's only a development environment so my risk is fairly low. However, I was just curious -- it's an Amazon ELB.
There are open support tickets in both Heroku and AWS about the impact of this bug but no answers yet.
I hope folks will promote a warning, both on Hacker News and Twitter, if either platform is affected.
Got links?
Here is an online tool to check if a site is affected by it: http://possible.lv/tools/hb/
How is it that Google and Hotmail were not vulnerable? Were they using their own implementations of SSL? I would have figured Google would make use of OpenSSL.
I'm not a security guru... So what kind of attack can this cause? Does this mean https will not be secured if the site uses vulnerable OpenSSL?
It means that if you're running a vulnerable version of OpenSSL, an attacker can read up to 64 KB of your process's memory per heartbeat request -- and by repeating requests, sweep out private keys, passwords, session data, and even DB connection strings.
Who the hell went through the trouble of buying a domain name, building a website, and designing a logo just to talk about one bug?
Someone who wanted to be sure that bug stuck in people's minds enough that they wouldn't just ignore it? Seems at least feasible.
I haven't seen a discussion about whether this can also bypass those using 2-step verification. Does anyone here know?
Scary what the implications of this will be for OpenVPN traffic that has been captured and stored over the past 2 years.
None really.
Exploiting this requires a client that sends a malicious Heartbeat packet with a large payload size.
This is unlikely to have happened in the past without any malice.
Gentoo has a USE flag for the TLS heartbeat, so it's easy to turn off:
root# USE='-tls-heartbeat' emerge openssl
Remember to add it to /etc/portage/package.use for a permanent fix (unless you use ~arch for which 1.0.1g is now available).
Any scope here for SSL companies to actually have to make good on warranties they offer here?
So, many certificate authorities will need to re-issue certificates?
We use openvpn. Does that need to be updated?
As far as I can tell, openvpn with TLS authentication is vulnerable as it just uses the usual TLS suite. If you use PSKs or the (mis-named?) --tls-auth PSK additional MAC, then you are only owned if one of your own legitimate nodes revealed the PSK (or was coopted into performing this attack) in which case you're already owned.
Love this
> Is there a bright side to all this?
Does Linux sshd have this bug?
Mirror please?
They should call their team AnttiMattR
(Riku, Antti and Matti)
So, basically, it is the consequence of "quickly adding an implementation" of a TLS protocol extension to an otherwise mature, more-or-less solid and "slightly" audited (at least by the OpenBSD and FreeBSD teams) code base. OK. It happens.
btw, is OpenBSD affected, or did they do the job well by not blindly adding unnecessary stuff (extensions) and bumping versions without auditing the changes?
Apparent introduction of the bug: https://github.com/openssl/openssl/commit/bd6941cfaa31ee8a3f...
"goto fail;" doesn't seem that bad now huh. Lovely how these GNU/Linux freedom fighters were LOLling their asses off earlier, but when it happens to them they sweat themselves and cry for spoon-fed instructions to compile a software package from its sources.
- sent from my Mac
I found this video of security researchers publicly announcing the existence of the heartbleed bug: https://www.youtube.com/watch?v=7CkTYPnJS0E&list=PL0ECC73C46...