Comment by yodon

3 days ago

This feels like a lot of rationalization aimed at excusing exactly the sort of code that Kernighan advised against writing.

Advising against writing complex code is not advising against learning.

The person who solves a hard problem correctly using simple code has generally spent more time learning than the person who solves it using complex code.

Looking at all he has done, I don't think he means "complex" when he says "clever". He's not advocating for (and is most likely against) the architecture-astronaut overengineering that some people here seem to be associating with "clever".

He means code that appears indecipherable at first glance, but then once you see how it works, you're enlightened. Simple and efficient code can be "clever".

  • I think clever is being used in two different ways, in that case.

    In the original quote, “clever” refers to the syntax: the way the code was constructed makes it difficult to decipher.

    I believe your interpretation (and perhaps the post’s, as well) is about the design. Often to make a very simple, elegant design (what pieces exist and how they interact) you need to think really hard and creatively, aka be clever.

    Programming as a discipline has a problem with using vague terms. “Clean” code, “clever” code, “complex” code; what are we trying to convey when we talk about these things?

    I came up with a term I like: Mean Time to Comprehension, or MTC. MTC is the average amount of time it takes for a programmer familiar with the given language, syntax, libraries, tooling, and structure to understand a particular block of code. I find that thinking about code in those terms is much more useful than thinking about it in terms of something like “clever”.

    (For anyone interested, I wrote a book that explores the rules for writing code that is meant to reduce MTC: The Elements of Code https://elementsofcode.io)

  • Good code should not be immediately understandable. Machines that make pasta do not look like humans making pasta. The same goes for code: good code does things in a machine way, and it won't look natural.

    Example: convert RGB to HSV. If you look around for a formula, you'll likely find one that starts like this:

        cmin = min(r, g, b);
        cmax = max(r, g, b);
    

    Looks very natural to a human. Thing is, as we compute 'cmin', we'll also compute (or nearly compute) 'cmax', so if we rewrite this for a machine, we should merge the two into something much less clear at first glance, as in the sketch below. Yet it will be better and perform fewer operations (the rest of the conversion is even more interesting, but won't fit in a comment).
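
    A minimal sketch of that merged scan (Python for brevity; the same structure applies in any language, and the function name is just illustrative):

        # Merged min/max of three channels in one pass: at most three
        # comparisons instead of the four in separate min() and max() scans.
        def channel_range(r, g, b):
            if r > g:
                hi, lo = r, g
            else:
                hi, lo = g, r
            if b > hi:
                hi = b
            elif b < lo:
                lo = b
            return lo, hi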

    • Recognizing that sort of opportunity is why we have optimizing compilers and intrinsics.

      Funny thing: in Python I've had a few occasions where I needed both the quotient and the remainder of an integer division, so naturally I used `divmod`, which under the hood can exploit exactly the sort of overlap you describe. I get the impression that relatively few Python programmers are familiar with `divmod`, despite it being a builtin. But it doesn't really end up mattering once you have to slog through all the object-indirection and bytecode-interpretation overhead; it seems `divmod` is actually slower, given the cost of looking up and calling a function. I still feel that invoking `divmod` is more intention-revealing, though.
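
      For example (the values are just illustrative):

          # One call yields both quotient and remainder,
          # instead of separate // and % expressions.
          total_seconds = 3725
          minutes, seconds = divmod(total_seconds, 60)
          print(minutes, seconds)  # 62 5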

    • In short, your stance is to sacrifice readability for performance.

      Legit in some cases. But for typical business software, code is for humans (the compiler will produce the machine code meant for the machine).

Personally, I see a kind of arc in programming style over time. It begins naive: more-experienced you will look back at your early code and realize you were essentially reinventing the wheel in one place, or see that a look-up table would have been more efficient (as examples).

As you learn more techniques and more data structures, the "cleverness" creeps into your code. Where that cleverness carries a complexity cost, sometimes the cost is worth it, though perhaps not always.

Naive-you would have struggled to understand some of the shortcuts and optimizations you are leveraging.

But then still-more-experienced you revisits the more clever code, with years spent both writing and attempting to debug such code. You may now begin to eschew the "clever" to the degree its cleverness makes the code harder to understand or debug. You might swear off recursion, for example, breaking it into two functions where the outer one runs a loop of some sort that is easier to set a breakpoint in and unwind a problem you were seeing (see the sketch below). Or you might lean more on services provided by the platform you are programming for, so you don't have to maintain your own image cache, your own thread manager, etc.
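
A minimal sketch of that refactor (Python; `Node`, `visit`, and the print are illustrative stand-ins):

    class Node:
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []

    def visit(node):
        # The per-node work that used to live in the recursive body.
        print(node.value)

    def walk(root):
        # The outer function runs a plain loop over an explicit stack
        # instead of recursing; a breakpoint here sees every node and
        # the pending work directly in `stack`.
        stack = [root]
        while stack:
            node = stack.pop()
            visit(node)
            stack.extend(node.children)

    # Example: walk(Node("a", [Node("b"), Node("c")])) prints a, c, b.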

I feel like in that last stage, most-experienced you may well be writing code that naive-you could have understood and learned from.

Yes, I agree this is true in some (many?) cases. But it is also true that sometimes the more complex solution is better, either for performance reasons or because it makes things simpler for users/API callers.

  • Yes, there's a valid argument that simple code does not always give the best performance. Optimizing simple code usually makes it more complex.

    But I think the main point stands. There's an old saying that giving a 60-minute presentation is easy; giving one in 15 minutes is hard. In other words, writing "clever" (complicated) code is easy. Distilling it down to something simple is hard.

    So the final result of any coding might be "complex", "simplified from complex", or "optimized from simple".

    The first and third iterations are superficially similar, although likely different in quality.