Comment by ereyes01

11 years ago

If I weren't already a Go programmer, reading this would make me more interested in Go, not less.

Simplicity is deceptively challenging, and quite different from simple-mindedness, which seems to be what the author is accusing Go of being.

Keeping code simple, elegant, and consistent is, IMHO, one of the most valuable principles a team can adhere to. Simple is not necessarily shorter; short code can often be subtle and sneaky rather than simple. Complex power tools can be fun for the inquiring mind, but the ultimate consumers of your product almost never appreciate how you built it, though they almost always appreciate the final outcome.

A wise man once said: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?” - Brian Kernighan

There's also the argument that your chance of writing buggy code increases with each additional line you write.

Highly readable code helps reduce bugs. But if that code is also so simplistic that it forces a lot of verbosity, you definitely increase the chance of introducing some stupid bug. Thankfully, because Go is compiled, it catches a decent portion of the silly bugs that in a more expressive language like Python might stay hidden for longer, but it's still a big tradeoff you're making.
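To make that concrete, here is a minimal sketch (the `double` function is invented for illustration) of the kind of mistake the Go compiler rejects at build time but Python would only surface when the offending line actually runs:

```go
package main

import "fmt"

// double only accepts ints. Passing a string is a build error in Go,
// not a latent failure waiting on a rarely exercised code path.
func double(n int) int {
	return n * 2
}

func main() {
	fmt.Println(double(21)) // 42
	// fmt.Println(double("21")) // rejected at compile time: string is not assignable to int
}
```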

  • This topic we're debating is actually well studied. There's a lot of good research out there that tries to measure how predictive certain attributes of code are of future bugs. Here's one paper in particular that is a very good read and widely cited: http://research.microsoft.com/pubs/70232/tr-2005-149.pdf

    As you would imagine, doing this is quite hard, and the results are quite inconsistent from project to project. Lines of code may bear some correlation to future bugs for some projects and languages, but it's far from clear-cut that more lines of code == more bugs or more complexity.

  • Go is not significantly more verbose than Python except in a couple of trivial cases (list comprehensions become a three-line loop; see the sketches after this sub-thread). Across even a moderately sized program, this will most likely not amount to a statistically significant difference.

    • As a professional Python developer and amateur Go dabbler, I very vehemently disagree.

      Without access to list/dict/set comprehensions, libraries like itertools, a lightweight lambda syntax, or something like Ruby's blocks, transformations on collections will always be considerably more tedious and verbose.

      Various functions have "bring-your-own-buffer" calling conventions, which often double or triple the number of lines required for those calls (sketched below).

      No tuple unpacking, no real facility for functional programming (Python's isn't that amazing, but with functools and some of the builtins you can get pretty far), and no operators for string formatting or for anything beyond the absolute bare-bones.

      Combine that with what Go lacks even compared to languages like Java (no inheritance [which is usually an anti-pattern but does decrease verbosity when used properly], no generics, no deep type/class reflection) and it's hard to say that Go isn't a verbose language.

      No statically typed language is going to be as terse or expressive as Python can be.

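To ground the verbosity debate above, here is a minimal sketch (the slice and names are invented) of a Python filter-and-map comprehension hand-rolled as a Go loop:

```go
package main

import "fmt"

func main() {
	nums := []int{1, 2, 3, 4, 5, 6}

	// Python one-liner: squares = [n*n for n in nums if n%2 == 0]
	// Without comprehension syntax, the same transformation becomes an
	// explicit loop appending into a pre-declared result slice.
	squares := make([]int, 0, len(nums))
	for _, n := range nums {
		if n%2 == 0 {
			squares = append(squares, n*n)
		}
	}

	fmt.Println(squares) // [4 16 36]
}
```

Whether that difference stays trivial or compounds across a codebase is exactly the disagreement in the replies above.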
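And a sketch of the "bring-your-own-buffer" convention mentioned above, using io.Reader.Read (the input string is arbitrary): the caller allocates the buffer and must track how many bytes were actually written.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"strings"
)

func main() {
	r := strings.NewReader("hello, world")

	// Read fills the caller-supplied slice and reports how many bytes
	// it wrote; the caller checks the error and slices to buf[:n].
	buf := make([]byte, 8)
	n, err := r.Read(buf)
	if err != nil && err != io.EOF {
		log.Fatal(err)
	}
	fmt.Println(string(buf[:n])) // "hello, w"
}
```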

How will you keep your code simple with all those manual error checks?

How will you keep your code simple if your procedural bent causes you to create dependencies/coupling at will?

How will you keep your code simple if your variables are mutable?

Recipe for complexity and bug hell
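For readers who haven't written Go, here is a minimal sketch of the manual error checking being debated (the config format and file name are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type Config struct {
	Port int `json:"port"`
}

// Every fallible call returns an error value that the caller must
// check and propagate by hand; there are no exceptions to lean on.
func loadConfig(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read config: %w", err)
	}
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("parse config: %w", err)
	}
	return &cfg, nil
}

func main() {
	cfg, err := loadConfig("config.json")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("port:", cfg.Port)
}
```

Whether those repeated if err != nil branches count as clarity or as noise is the crux of the exchange below.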

  • Manual error checks are no more complicated than any other code that branches.

    Implicitly satisfied interfaces make it trivial to decouple code, because I can pass a type to a function in a different package, and that package doesn't even need to know that type exists (see the sketch at the end of the thread).

    You're obviously pushing the pure functional route, which is nice, but 95% of code in production is procedural. There's a reason for that. I could speculate why, but I don't need to.

    • > I could speculate why, but I don't need to.

      I will. There's a pattern out there and it goes something like this:

      1) Most programming classes still teach procedural

      2) New, typical real-world programmer looks at procedural and functional, finds functional more alien and difficult (because it's far more dissimilar to the popular languages they played around with in their youth), settles on procedural

      3) 10 years later, after wasting a significant share of their cycles on piles of procedural spaghetti code and spending hours debugging classes of bugs that ultimately came down to unmanaged complexity, tangled interdependencies, no recognized schedule for paying down difficult-to-quantify technical debt, and mutability... the programmer looks at functional languages again and finds them not lacking

      4) programmer takes on a side project in a functional language, thinks it's awesome, mind is bent in pleasant shapes, considers quitting day job or "the big rewrite", realizes the latter is not feasible (see: Joel Spolsky), gets frustrated with the status quo

      5) programmer posts to future incarnation of Hacker News probably sounding like an ivory-tower prophet, preaches about macros and homoiconicity and immutability and the dynamics of programming teams, ends up angel-investing and founding an incubator instead of fighting the hopeless procedural-vs.-functional battle

      6) Repeat.

      ;)

      EDIT: BIAS: I think Elixir (http://elixir-lang.org/) finally has a real shot at breaking this cycle. It's the first functional language I've used that "feels" accessible enough to most folks (Ruby-ish syntax) AND has all the typical functional niceties AND has real macros without being jarringly, off-puttingly homoiconic AND has an extreme focus on concurrency (perfect for the future Web... it can fire off a million processes in a second) AND embraces failure (good for anything that touches "the real world") AND uses the Actor model (a candidate for "most scalable language design pattern"). OO is, IMHO, a dead man walking right now. It makes it too easy to pass mutable state around everywhere, it over-complicates code by allowing any code anywhere with an instance of a class to call methods on it (thereby coupling the class implementation to the entire codebase), it ignores the cost of inheritance, it is difficult to parallelize, etc. etc.
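Returning to the earlier point about implicitly satisfied interfaces, here is a minimal sketch (the Temp type is invented; fmt.Stringer stands in for an interface defined in some other package): the type never mentions the interface, yet any code expecting the interface accepts it.

```go
package main

import "fmt"

// Temp never declares that it implements fmt.Stringer; having a
// matching String() method is enough. The fmt package can format it
// without knowing this type exists -- the decoupling described above.
type Temp float64

func (t Temp) String() string {
	return fmt.Sprintf("%.1f°C", float64(t))
}

func main() {
	var s fmt.Stringer = Temp(21.5) // compiles: Temp satisfies Stringer implicitly
	fmt.Println(s)                  // 21.5°C
}
```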