Comment by onion2k

6 years ago

Maybe this works for potters but I don't think it'd work for developers. If a compSci lecturer suggested one group would get an A for writing a perfect, bug-free app and the other group would get an A for writing 50,000 lines of code, I wouldn't expect the group writing as much code as they could to make the best app.

Knuth suggests, in TAOCP, that the best way to write a program is to write it once, scrap it completely and rewrite from scratch. Very much in line with spending an entire semester making one perfect program vs spending a semester writing and rewriting new programs from scratch.

  • I used this extensively for code assignments in university, but for a different reason. We had to do most things in C, which can be very painful for prototyping and experimenting with new approaches. So I would solve the problem first in Python, try different approaches, and then after I was happy with the solution I'd write it again in C. Saved me a lot of time in the end.

  • First pass: you figure out which problems need to be solved. Second pass: you already have the "what" figured out, so now you can focus on how to do it nicely.

  • A better approach in my experience is to grow the whole system from a “skeleton” implementation, refactoring the architecture and code as your understanding grows, with the key use cases as your guide.

Group B would be assigned to write 50 apps in this case. In your example, 50,000 LOC would be like making one really giant pot.

  • In the pottery example Group A were marked by weight ("fifty pounds of pots rated an “A”, forty pounds a “B”, and so on."). Making a single giant pot could get you an A if the lecturer wasn't pedantic about the meaning of "pots".

    • Well, if you're making "pots", you're not making "vases", "pans" or "thimbles". The experiment doesn't work if you don't set any acceptance criteria.

I think it does work for developers. You also learn programming by doing it, rather than by thinking about it. Write 50 apps, and your 50th is bound to be better than if you start out trying to write the perfect app on your first try.

I think if doing this I would split the groups so one has to create an app that's finished (defining that would be fun) and one has to write an app whose code is 'clean' (again, fun defining that).

I say this because I've worked with a lot of inexperienced developers who commonly have a stack of half-finished projects, or things they've rewritten half a dozen times to get the architecture right. In my experience you learn a lot by finishing things, and by being constrained by previous decisions.

Bonus points if you can get customers relying on your work; nothing motivates like support emails.

I think it could potentially work!

Consider someone with little to no programming experience trying to build a single app, planning it out and refactoring and refining it continuously, versus constantly creating and archiving multiple toy prototypes of the same app and then picking the best among them.

(The app's size and complexity should be commensurate to a single clay pot in a ceramics course.)

It requires the students to actually try to make something of quality, sure, but only to the same degree that the students in the original (apocryphal) story did; after all, those students could easily have shown up with a pile of baked, misshapen clay instead of proper pots.

This would be analogous to

creating 1 pot using 10 tons of clay = writing 50,000 lines of code for 1 project

vs

writing lots of different projects = creating lots of different pots.

  • That only works if the second option is constrained to make all the projects approximately the same number of lines of code.

If they had to make a ~1000 line program 50 times the analogy would work better.

In the extremes, neither approach works: if you are just fighting yesterday's bugs, or paralysed by choice and perfection, both are bad ideas.

It's like order and chaos: what you want is to find a balance that allows you to explore, and leaves things a bit open so they can be extended, but is also sound and precise enough that it actually works and does the job.

In this hypothetical scenario, instead of counting final lines of code, maybe we should count lines of affected code: take each commit and count how many lines were added, modified or deleted.

That matches the "quantity trumps quality" idea above more closely in our industry.
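That per-commit count can be sketched with plain git; a minimal one-liner, assuming you run it inside a repository (`git log --numstat` reports added and deleted lines per file, with a modification counted as one deletion plus one addition):

```shell
# Rough "affected lines" metric: sum lines added + deleted across all commits.
# Binary files show "-" in --numstat; awk treats those fields as 0.
git log --numstat --pretty=tformat: \
  | awk '{ added += $1; deleted += $2 } END { print added + deleted, "lines affected" }'
```

This rewards iteration the way the pottery grading did: a line written, reconsidered, and rewritten counts every time it changes, not just once in the final tally.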

How about writing the same app 5 times, no matter its quality? Is that a better analogy?