Comment by fabian2k

2 years ago

A library can only stay static if the environment it's used in is also static. And many of the environments in which modern software is developed are anything but static; web frontends are one example where things change quite often.

A library that can stand entirely on its own might be fine if it's never updated. But e.g. a library that depends on a web frontend framework will cause trouble if it is not updated to adapt to changes in the ecosystem.

Also, even a very stable project that is "done" will receive a trickle of minor tweak PRs (often docs, tests, and cleanups) proportional to the number of its users, so the rate of change never falls to zero until the code stops being useful.

  • I think this is also in inverse proportion to the arcane-ness of the intended use of the code, though.

    Your average MVC web framework gets tons of these minor contributors, because it's easy to understand MVC well enough to write docs or tests for it, or to clean up the code in a way that doesn't break it.

    Your average piece of system software gets some. The Linux kernel gets a few.

    But ain't nobody's submitting docs/tests/cleanups for an encryption or hashing algorithm implementation. (In fact, AFAICT, these are often implemented exactly once, as a reference implementation that does things in the same weird way — using procedural abstract assembler-like code, or transpiled functional code, or whatever — that the journal paper describing the algorithm did; and then not a hair of that code is ever touched again. Not to introduce comments; not to make the code more testable; definitely not to refactor things. Nobody ever reads the paper except the original implementor, so nobody ever truly understands what parts of the code are critical to its functioning / hardening against various attacks, so nobody can make real improvements. So it just sits there.)

  • I disagree. Tiny libraries can be fine indefinitely. For example, this little library, which inverts a promise in JavaScript (a rough sketch of the pattern follows this thread).

    I haven’t touched this in years and it still works fine. I could come in and update the version of the dependencies but I don’t need to, and that’s a good thing.

    https://github.com/josephg/resolvable

    • I think total number of commits is probably a good metric too. If the project only has 7 commits to begin with then it's unlikely to get any more updates after it's "done". But a 10 year old project with 1000 commits where the last commit was 3 years ago is a little more worrying.

  • > so the rate of change never falls to zero until the code stops being useful

    Non-useful software changes all the time ;) Also, useful software stands still all the time, without any proposed changes.
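To make the "inverts a promise" example above concrete: the pattern is a promise whose resolve/reject functions are handed to the caller instead of being trapped inside the executor (often called a deferred). A minimal sketch in TypeScript, with illustrative names rather than the linked library's actual API:

```typescript
// A promise plus externally callable resolve/reject. Hypothetical
// names; the linked library's API may differ.
interface Resolvable<T> extends Promise<T> {
  resolve: (value: T | PromiseLike<T>) => void;
  reject: (reason?: unknown) => void;
}

function resolvable<T>(): Resolvable<T> {
  let resolve!: (value: T | PromiseLike<T>) => void;
  let reject!: (reason?: unknown) => void;
  // The executor runs synchronously, so both callbacks are captured
  // before we return.
  const promise = new Promise<T>((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return Object.assign(promise, { resolve, reject });
}

// Usage: hand the promise to a consumer, settle it from somewhere else.
const ready = resolvable<string>();
ready.then(msg => console.log(msg));
ready.resolve("done");
```

A helper this small has nothing to rot: no dependencies, and it leans only on Promise semantics the language guarantees.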

Even if the environment it's used in is static, the world it lives in is not.

I work in industrial automation, which is a slow-moving behemoth full of $20M equipment that gets commissioned once and then runs for decades. There's a lot of it still controlled with Windows 98 PCs, VB6 messes, and PXI cards from the 90s, and even more that uses SLC500 PLCs.

But when retrofitting these machines or building new ones, I'll still consider the newness of a tool or library. Modern technology is often lots more performant, and manufacturers typically support products for date-on-market plus 10 years.

There's definitely something to be said for sticking with known good products, but even in static environments you may want something new-ish.

As someone who migrated a somewhat old project to one which uses a newer framework, I agree with this. The amount of time I spent trying to figure out why an old module was broken before realizing that one of its dependencies had switched to ESM while the module itself was still using CJS... I don't even want to think about it. Better to just make sure that a module was written or updated within the last 3 years, because that will almost certainly work.
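For anyone who hasn't hit this failure mode: it's a CommonJS consumer pulling in a dependency whose newer releases ship only as ESM. A rough sketch of what it looks like and the usual escape hatches (the package name is a placeholder):

```typescript
// Inside an old CJS module, this line worked for years:
//
//   const lib = require("esm-only-lib");
//
// until a newer release of the dependency dropped its CJS build, at
// which point Node throws ERR_REQUIRE_ESM ("require() of ES Module
// ... not supported").
//
// Typical fixes: pin the last CJS release, migrate the consumer to
// ESM, or load the dependency lazily with a dynamic import(), which
// CJS code is allowed to do. (Careful: some compilers/bundlers that
// target CJS rewrite import() back into require(), reintroducing the
// error.)
async function loadLib(): Promise<unknown> {
  return import("esm-only-lib"); // hypothetical ESM-only package
}
```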

This is a very strange example. Browsers have fantastic backwards compatibility. You can use the same libraries and framework you used ten years ago to make a site and, with very few exceptions, it will work perfectly fine in a modern browser.

  • Browsers have decent backwards compatibility for regular webpages, but there’s a steady stream of breakage when it comes to more complex content, like games. The autoplay policy changes from 2017-2018, the SharedArrayBuffer trainwreck, gating more and more stuff behind secure contexts, COOP/COEP or other arbitrary nonsense... all this stuff broke actual games out in the wild. If you made one with tools from 10 years ago you would run into at least a couple of these (the SharedArrayBuffer gating is sketched after this thread).

  • Browsers themselves aren't usually the problem. While sometimes they make changes, like what APIs are available without HTTPS, I think you're right about their solid backwards compatibility.

    What people really mean when they talk about the frontend is the build system that gets your (modern, TypeScript) source code into (potentially Safari) browsers.

    Chrome is highly backwards compatible. Webpack, not so much.

    This build system churn goes hand-in-hand with framework churn (e.g. going from Vue 2 to 3, while the team have put heaps of effort into backwards compatibility, is not issue-free), and more recently, the rise of TypeScript and the way the CJS to ESM transition has been handled by tools (especially Node).

  • The problem arises when you're not using old libraries and frameworks. You're using new stuff, and come across an old, unmaintained library you'd like to use.

    Hey, it uses the same frameworks you're using --- except, oh, ten years ago.

    Before you can use it, you have to get it working with the versions of those frameworks you're using today.

    Someone did that already before you. They sent their patch to the dead project, but didn't get a reply, so nobody knows about it.

  • Yeah, but there are still thousands (hundreds of thousands?) of games on Newgrounds that you can no longer play without running a separate Flash player

  • You absolutely can do that, but it is likely the final output will have numerous exploitable vulnerabilities.
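On the SharedArrayBuffer point from a few comments up: since roughly 2020-2021, browsers only expose it on pages that are cross-origin isolated, which means the server hosting the game has to send two headers it never needed before. A rough sketch using Node's built-in http module:

```typescript
import { createServer } from "node:http";

// SharedArrayBuffer (and a few related APIs) are only available when
// self.crossOriginIsolated is true, which requires both of these
// response headers on the hosting page:
createServer((req, res) => {
  res.setHeader("Content-Type", "text/html");
  res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
  res.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
  res.end(`<script>
    // A game built when SharedArrayBuffer first shipped could assume
    // it was always defined; today it has to check:
    console.log(typeof SharedArrayBuffer, crossOriginIsolated);
  </script>`);
}).listen(8080);
```

The require-corp policy also blocks cross-origin assets that don't opt in via CORS or CORP headers, which is a second way these changes broke existing games that loaded resources from CDNs.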

>> web frontends are one example where things change quite often.

There is a world of difference between linux adding USB support and how web front ends have evolved. One of them feels like they are chasing the latest shiny object...

A VM with a fixed spec (e.g. the JVM) can delegate OS churn and the like to the VM maintainers, and thus protect the authors of managed code.

Does it help the dependency ecosystem churn? No.

Until we get very fine-grained API versioning info (at method/function granularity; and even then, is it good enough, and what OSS author could maintain that info outside of a small API?), library version info will simply be a coarse-grained thing.

If only there was a super smart AI with great breadth of knowledge with capabilities to infer this relationship graph, but I don't think there's a lot of research into AIs like that these days, right?
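To make the earlier point about method/function-granularity versioning concrete: the nearest thing in common use today is doc metadata that tooling can read, such as @since and @deprecated tags, and it only stays accurate when a human updates it on every change. Everything in this sketch is made up for illustration:

```typescript
/** @since 1.2.0 */
export function parseConfig(text: string): Map<string, string> {
  // Naive key=value parser, purely illustrative.
  return new Map(
    text
      .split("\n")
      .filter(line => line.includes("="))
      .map(line => {
        const [key, ...rest] = line.split("=");
        return [key.trim(), rest.join("=").trim()] as [string, string];
      })
  );
}

/** @deprecated since 2.0.0, use parseConfig instead */
export function readConfig(text: string): Map<string, string> {
  return parseConfig(text);
}
```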