
Comment by derefr

2 years ago

Depends on the language.

Some languages have a release every year or two that introduces some new, elegant syntax (or a new stdlib ADT, etc.) to replace a pattern that was frequent yet clumsy in code written in that language. The developer communities for these languages then, almost instantly, consider use of the new syntax to be "idiomatic", and consider any code that still does things the old, clumsy way to need fixing.
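To make that concrete, here's a hypothetical illustration in Python (the comment doesn't name a language, so this is just an assumed example, and the class names are made up): the hand-rolled boilerplate that used to be normal, next to the dataclasses decorator the stdlib later grew.

    from dataclasses import dataclass  # added to the stdlib in Python 3.7

    # Illustrative only: the "old, clumsy" hand-written pattern.
    class PointOld:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __repr__(self):
            return f"PointOld(x={self.x!r}, y={self.y!r})"

        def __eq__(self, other):
            return isinstance(other, PointOld) and (self.x, self.y) == (other.x, other.y)

    # What most of the community would now call the idiomatic equivalent:
    # __init__, __repr__, and __eq__ are generated from the field declarations.
    @dataclass
    class Point:
        x: float
        y: float

A reviewer reading the second version has far less to check by hand, which is roughly the legibility argument being made here.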

The argument for making the change to any particular codebase is often that, relative to the new syntax, the old approach makes things more opaque and harder to maintain / code-review. If the new syntax existed from the start, nobody would think the old approach was good code. So, for the sake of legibility to new developers, and to lower the barrier to entry to code contributions, the code should be updated to use the new syntax.

If a library is implemented in such a language, and yet it hasn't been updated in 3+ years, that's often a bad sign — a sign that the developer isn't "plugged into" the language's community enough to keep the library up to date as idiomatic code that other developers (many of whom may have just learned the language in its latest form from a modern resource) can easily read. And, therefore, that the developer may not be interested in receiving external PRs.

I wonder if anyone has ever approached this scientifically and A/B-tested it on a codebase. A community is fine with a pattern for years, and then, the moment the change lands, it all instantly becomes bad practice and "loses legibility." I'm confident it mostly gets done not for any objective result, but because most developers are anxious perfectionists in need of a good therapist. And that's plague-level contagious. Some people are born into this and grow up sick.