Comment by AshamedCaptain

5 years ago

I have a hard time believing that increasing the size of your terminal "helps reuse".

First, I do not agree that working memory is any significant limit when analyzing code, especially because one of the first steps is going to be creating the mental abstraction that allows you to, precisely, understand the code. The density of that abstraction is definitely uncorrelated with the amount of whitespace. Thus, scrolling is only going to be an issue for the first couple of reads.

Second, say your patented steganography mechanism manages to fit 3x the amount of "code" in the same terminal size (and I am being generous). Is this going to increase "code reuse" by any significant amount?

> one of the first steps is going to be creating the mental abstraction that allows you to, precisely, understand the code.

Precisely.

Now, a short program is "short enough" that you can convince yourself it is correct. That is to say, I'm sure you can imagine writing "hello world" without making a mistake, and that there is some threshold of program length beyond which your confidence in error-free programming is lost. For every seeing-programmer I have ever met, and I strongly suspect for all seeing-programmers, that length is measured in "source-code pixels". Not lines, or characters, but literal field of view. Smaller screen? More bugs.

When you are forced to deal with your application in terms of the mental abstraction, rather than in terms of what the code actually says it does, it is simply because that code is off-screen, and that mental abstraction is a sieve: if you had any true confidence in it, you would not believe that program length correlates with bugs.

> scrolling is only going to be an issue for the first couple of reads.

I've worked on codebases large enough that they took a few years to read fully, and codebases changing so quickly that there is no point in learning everything. Sometimes you can read a program, and sometimes you can't, but when you can't, the damage that scrolling does seems infinitely worse.

> Is this going to increase "code reuse" by any significant amount?

Yes, and usually by a factor of a thousand or more.

  • > For every seeing-programmer I have ever met, and I strongly suspect for all seeing-programmers, that length is measured in "source-code pixels". Not lines, or characters, but literal field of view.

    By the same logic: font size affects number of bugs.

    I still doubt it. First, the size of the mental model is definitely not related to physical source-code length, but rather to an abstract, hard-to-define "operation" concept. Therefore "hello world" is the same size no matter how large your font is or how much whitespace there is between the prologue and the first statement/expression.

    In fact, I would even argue that one's mental abstraction drifts farther from the actual on-screen code the more abbreviated the code is. If it reads like this:

         MC(AV(z),AV(w),m*k);                 /* copy old contents      */
         if(b){ra(z); fa(w);}                 /* 1=b iff w is permanent */
         *AS(z)=m1=AM(z)/k; AN(z)=m1*c;       /* "optimal" use of space */
    

    It doesn't matter how much space it occupies on screen. The simple mapping of names to identities is going to fill the entirety of your working memory. And I wouldn't believe you can "learn" this mapping. Our memory works in terms of concepts, not letters; that is why a 7-word passphrase is almost as easy to remember as a 7-character password. The identifiers here do not follow any discernible pattern (sometimes it's memset, other times it's MC instead of memcpy), and I really doubt any structure can be followed at two characters per identifier. People already have trouble remembering the much shorter and much more descriptive set of POSIX system calls.

    > Sometimes you can read a program, and sometimes you can't, but when you can't, the damage that scrolling does seems infinitely worse.

    I've worked for companies that used to remote into old X11 servers to view the code. Latency was measured in seconds, so the impact of scrolling would have been huge. It was definitely not the biggest drag on productivity; in my experience, branchy code flow was still the biggest hindrance.

    > Yes, and usually by a factor of a thousand or more.

    This would imply a "power law" of code reuse, where the code you are likely to need is closer to the point where you need it. The only way I would believe such a rule is, precisely, if your codebase doesn't reuse any code at all and people just copy code "close to the point of use" due to some arcane coding style.

    My impression: people are cargo-culting here.

    • > By the same logic: font size affects number of bugs.

      Yes it does.

      > I still doubt it.

      https://www.ets.org/Media/Research/pdf/RR-01-23-Bridgeman.pd...

      > Our memory works in terms of concepts, not letters; the reason a 7 word passphrase is almost as easy to remember as a 7 character password

      And yet we write things down because our memory is limited, and the notation we choose strikes a balance between packing meaning into a glyph and the speed at which a thought can translate into the mark. So you really have this exactly backwards: you have to memorize less if you can see more of it.

      > It was definitely not the biggest impact to productivity

      "The quality of the software is rarely the biggest impact on a business", is perhaps the most depressing thing I've ever heard anyone say about their job.

      I'd like to think that my work is a bit more important than that, and worth any expense to make me more productive at it.

      > And I wouldn't believe you can "learn" this mapping.

      It sounds like ¯\_(ツ)_/¯ you wouldn't believe a lot.