Comment by psykotic
5 years ago
(I'm the person he was quoting in the article.)
When I use setjmp/longjmp error handling I almost always want abort semantics, but at the library level rather than at the OS process level. [1] Where applicable it's the simplest, most robust model I know. You have a context object that owns all your resources (memory blocks, file handles, etc.), which is what lets you do simple, unified clean-up rather than fine-grained scoped clean-up in the manner of RAII or defer. You can see an example in tcc here:
https://github.com/LuaDist/tcc/blob/255ba0e8e34f999ee840407c...
https://github.com/LuaDist/tcc/blob/255ba0e8e34f999ee840407c...
[1] It goes without saying that a well-written library intended for general use is never allowed to kill the process. This presents a conundrum in writing systems-level C libraries. What do you do if something like malloc fails in a deep call stack within the library? Systems-level libraries need to support user-provided allocation functions, which often work out of fixed-size buffers, so failure isn't a fatal error from the application's point of view. You'd also want to use this kind of thing for non-debug assert failures for your library's internal invariants.
This style of setjmp/longjmp error handling works well for such cases since you can basically write the equivalent of xmalloc but scoped to the library boundary; you don't have to add hand-written error propagation to all your library functions just because a downstream function might have such a failure. I'm not doing this as a work-around for a lack of finally blocks, RAII or defer statements. It's fundamentally about solving the problem at a different granularity by erecting a process-like boundary around a library.
See my response to a parallel comment from dannas.
I can see some minor corner cases where it could be worthwhile, but the mental overhead isn't worth it.
I've written plenty of realtime code, but spending a lot of time on the code running in the interrupt handlers is mentally exhausting and error-prone; I do that when I have no choice. Likewise, I've written a lot of assembly code, but it's been decades since I wrote a whole program that way -- I don't have enough fingers to keep track of all the labels and call paths.
E.g., just because C++ has pointers doesn't mean I use them very often; in >90% of cases a reference works instead.