Comment by geocar

5 years ago

> This argument is significantly weakened when simply removing the meaningless macros and adding whitespace improves readability.

I disagree wholeheartedly. Whitespace tends to push code further away from the code that uses it; scrolling and tab-flipping then require the developer to hold that code in their head, where it is most likely to degrade. It is much, much better to make them remember as little as possible.

It also helps reuse: if you don't have to scroll, you can see more code at once, so you can spot when something is a good candidate for abstracting, simply because you can see two pieces of code doing the same thing.

Macros like this help with that, and so they aren't "meaningless". Less whitespace helps with that, and so it actually improves readability (to those who have to read it!).
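
For context, the kind of macro being defended here is in the dense style of the J sources; the snippet below is only my rough approximation of that style (a DO loop macro and R for return), not the project's actual definitions:

    /* approximation of the dense macro style, not the real J definitions */
    #include <stdio.h>

    typedef long I;
    #define R return
    #define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}  /* run x with i = 0..n-1 */

    /* a whole routine fits in one visible line: sum of 0..n-1 */
    I sum(I n){I s=0;DO(n,s+=i)R s;}

    int main(void){printf("%ld\n",sum(10));R 0;}  /* prints 45 */

The point is that once DO and R are in your head, every call site costs almost no vertical space.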

The trade-off is that you can't hire someone with "C" on their CV and expect them to be productive on their first day, but in a complex codebase, this might not be possible for other reasons.

I have a hard time believing that increasing the size of your terminal "helps reuse".

First, I do not agree that working memory is any significant limit when analyzing code, especially because one of the first steps is to create the mental abstraction that allows you to, precisely, understand the code. The density of that abstraction is definitely uncorrelated with the amount of whitespace. Thus, scrolling is only going to be an issue for the first couple of reads.

Second, say your patented steganography mechanism manages to fit 3x the amount of "code" in the same terminal size (and I am being generous). Is this going to increase "code reuse" by any significant amount?

  • > one of the first steps is to create the mental abstraction that allows you to, precisely, understand the code.

    Precisely.

    Now a short program is "short enough" that you can convince yourself it is correct; that is to say, I'm sure you can imagine writing "hello world" without making a mistake, and that there is some threshold of program length beyond which your confidence in error-free programming is lost. For every seeing-programmer I have ever met, and, I strongly suspect, all seeing-programmers, that length is measured in "source-code pixels". Not lines, or characters, but literal field of view. Smaller screen? More bugs.

    When you are forced to deal with your application in terms of the mental abstraction, rather than in terms of what the code actually says it does, it is simply because that code is off-screen, and that mental abstraction is a sieve: if you had any true confidence in it, you would not believe that program length correlates with bugs.

    > scrolling is only going to be an issue for the first couple of reads.

    I've worked on codebases large enough that they've taken a few years to read fully, and codebases changing so quickly that there's no point in learning everything. Sometimes you can read a program, and sometimes you can't, but when you can't, the damage that scrolling does seems infinitely worse.

    > Is this going to increase "code reuse" by any significant amount?

    Yes, and usually by a factor of a thousand or more.

    • > For every seeing-programmer I have ever met, and, I strongly suspect, all seeing-programmers, that length is measured in "source-code pixels". Not lines, or characters, but literal field of view.

      By the same logic: font size affects number of bugs.

      I still doubt it. First, the size of the mental model is definitely not related to physical source code length, but rather to some abstract, hard-to-define notion of "operations". Therefore "hello world" is the same size no matter how large your font is or how much whitespace sits between the prologue and the first statement/expression.
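
      To put it concretely: the two renderings below are, as far as my mental model is concerned, the exact same program; only the pixels differ (a trivial illustration, obviously):

          /* rendering 1: minimal whitespace */
          #include <stdio.h>
          int main(void){printf("hello world\n");return 0;}

          /* rendering 2: the same program, generously spaced */
          #include <stdio.h>

          int main(void)
          {
              printf("hello world\n");

              return 0;
          }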

      In fact, I would even argue that one's mental abstraction drifts farther from the actual on-screen code the more abbreviated the code is. If it reads like this:

           MC(AV(z),AV(w),m*k);                 /* copy old contents      */
           if(b){ra(z); fa(w);}                 /* 1=b iff w is permanent */
           *AS(z)=m1=AM(z)/k; AN(z)=m1*c;       /* "optimal" use of space */
      

      It doesn't matter how little space it occupies on screen. The mere mapping of names to identities is going to fill the entirety of your working memory. And I don't believe you can "learn" this mapping. Our memory works in terms of concepts, not letters; that is why a 7-word passphrase is almost as easy to remember as a 7-character password. The identifiers here do not follow any discernible pattern (sometimes it's memset, other times it's MC instead of memcpy), and I really doubt any structure can be followed at two characters per identifier. People already have trouble remembering the much shorter and much more descriptive set of POSIX system calls.
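
      Just to make that mapping explicit, here is the same fragment with the abbreviations expanded into invented but descriptive names; my readings of MC, AV, AS, AN, AM, ra and fa are guesses for illustration, not the actual J definitions:

          /* names below are invented stand-ins, assuming MC=memcpy,        */
          /* AV=value pointer, AS=shape, AN=atom count, AM=allocated size,  */
          /* ra/fa = reference-count increment/decrement                    */
          memcpy(z->value, w->value, m * k);        /* copy old contents      */
          if (b) { ref_incr(z); ref_decr(w); }      /* 1=b iff w is permanent */
          z->shape[0] = m1 = z->alloc_size / k;     /* "optimal" use of space */
          z->atom_count = m1 * c;

      Even if that takes more "source-code pixels", the reader no longer has to carry the name-to-meaning table in working memory, which is exactly the point.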

      > Sometimes you can read a program, and sometimes you can't, but when you can't, the damage that scrolling does seems infinitely worse.

      I've worked for companies where we had to remote into old X11 servers just to view the code. Latency was measured in seconds, so the cost of scrolling should have been enormous. It was definitely not the biggest drag on productivity; in my experience, branchy code flow was still the biggest hindrance.

      > Yes, and usually by a factor of a thousand or more.

      This would imply a "power law" of code reuse, where the code you are likely to need tends to sit close to the point where you need it. The only way I would believe such a rule is, precisely, if your codebase doesn't reuse any code at all and people just copy code "close to the point of use" because of some arcane coding style.

      My impression: people are cargo-culting here.
