Comment by AshamedCaptain
5 years ago
> For every seeing-programmer I have ever met, and I suspect strongly all seeing-programmers, that length is measured in "source-code pixels". Not lines, or characters, but literal field of view.
By the same logic: font size affects number of bugs.
I still doubt it. First, the size of the mental model is definitely not tied to the physical length of the source code, but to an abstract, hard-to-define notion of "operations". "Hello world" is therefore the same size no matter how large your font is or how much whitespace sits between the prologue and the first statement/expression.
In fact, I would even argue the opposite: the more abbreviated the code, the farther one's mental abstraction sits from what is actually on screen. If it reads like this:
MC(AV(z),AV(w),m*k); /* copy old contents */
if(b){ra(z); fa(w);} /* 1=b iff w is permanent */
*AS(z)=m1=AM(z)/k; AN(z)=m1*c; /* "optimal" use of space */
It doesn't matter how little space it occupies on screen. Simply mapping names to identities is going to fill the entirety of your working memory. And I don't believe you can "learn" this mapping. Our memory works in terms of concepts, not letters; that is why a 7-word passphrase is almost as easy to remember as a 7-character password. The identifiers here follow no discernible pattern (sometimes it's memset, other times it's MC instead of memcpy), and I really doubt any scheme can be kept straight at two characters per identifier. People already have trouble remembering the much shorter and much more descriptive set of POSIX system calls.
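To make the argument concrete, here is one possible expansion of those lines with descriptive names. Everything in it is an illustrative assumption, not the real API: MC is read as memcpy, ra as a refcount increment (fa, the matching decrement, is only noted in a comment), and AS/AM/AN as shape/allocation/count accessors on a made-up Array struct.

```c
#include <string.h>

/* A speculative re-spelling of the terse fragment, purely to illustrate
 * the naming point. All names and types here are assumptions. */
typedef struct {
    long refcount;
    long allocated;      /* AM: allocated capacity, in elements */
    long count;          /* AN: number of elements in use       */
    long shape[1];       /* AS: leading-axis shape              */
    double values[16];   /* AV: the data buffer                 */
} Array;

/* MC(AV(z),AV(w),m*k); if(b){ra(z); fa(w);} *AS(z)=m1=AM(z)/k; AN(z)=m1*c; */
static void copy_and_reshape(Array *z, const Array *w,
                             long m, long k, long c, int w_is_permanent) {
    memcpy(z->values, w->values,
           (size_t)(m * k) * sizeof(double));   /* copy old contents */
    if (w_is_permanent) {
        z->refcount++;      /* ra(z); fa(w) would drop w's count here */
    }
    long rows_fit = z->allocated / k;           /* m1 = AM(z)/k */
    z->shape[0] = rows_fit;                     /* *AS(z) = m1  */
    z->count = rows_fit * c;                    /* AN(z) = m1*c */
}
```

Whether this version is clearer is exactly the disagreement here: it spends far more screen space, but the reader no longer has to carry the name-to-identity mapping in working memory.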
> Sometimes you can read a program, and sometimes you can't, but when you can't, the damage that scrolling does seems infinitely worse.
I've worked for companies where we remoted into old X11 servers to view the code. Latency was measured in seconds, so the impact of scrolling should have been huge. It was definitely not the biggest drag on productivity; in my experience, branchy control flow was still the biggest hindrance.
> Yes, and usually by a factor of a thousand or more.
This would imply a "power law" of code reuse, where the code you are likely to need sits close to the point where you need it. The only way I could believe such a rule is if your code base reuses no code at all and people simply copy code "close to the point of use" because some arcane coding style demands it.
My impression: people are cargo-culting here.
> By the same logic: font size affects number of bugs.
Yes it does.
> I still doubt it.
https://www.ets.org/Media/Research/pdf/RR-01-23-Bridgeman.pd...
> Our memory works in terms of concepts, not letters; the reason a 7 word passphrase is almost as easy to remember as a 7 character password
And yet we write things down because our memory is limited, and the notation we choose strikes a balance between packing meaning into each glyph and the speed at which a thought can be translated into the mark. So you really have this exactly backwards: you have to memorize less if you can see more of it.
> It was definitely not the biggest impact to productivity
"The quality of the software is rarely the biggest impact on a business", is perhaps the most depressing thing I've ever heard anyone say about their job.
I'd like to think that my work is a bit more important than that, and worth any expense to make me more productive at it.
> And I wouldn't believe you can "learn" this mapping.
It sounds like you wouldn't believe a lot. ¯\_(ツ)_/¯