Comment by lukan
9 months ago
It is still a lot of work.
And every external library you do pull in, to ease some of the workload, is just waiting to go abandoned next year. So instead of focusing on release by that time, you will now focus on reimplementing that needed functionality that just stopped working.
> So instead of focusing on release by that time, you will now focus on reimplementing that needed functionality that just stopped working.
That makes no sense. A library being abandoned doesn't mean it suddenly stops working.
I wish that were the case, but on the web stack it has happened to me more than once.
The web evolves. Lots of features get deprecated all the time and sometimes removed or change behavior in a significant way.
The library would continue to work, but it may no longer be usable if other dependencies require a later version "n" that the abandoned library is incompatible with. Ruby or Python runtimes are the classic example.
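To make the point concrete: the abandoned library doesn't crash, it just falls outside the version range the rest of the stack has moved to. A minimal sketch of that constraint check, with hypothetical version numbers and a hypothetical `supports` helper (not from any real package manager):

```python
def supports(runtime, minimum, maximum=None):
    """True if a runtime version tuple falls inside a library's supported range."""
    if runtime < minimum:
        return False
    if maximum is not None and runtime > maximum:
        return False
    return True

runtime = (3, 12)
# Abandoned library: its last release was only ever tested up to 3.8
# (hypothetical numbers for illustration).
print(supports(runtime, (3, 6), (3, 8)))  # False: the runtime moved past it
# Actively maintained dependency that now requires 3.10 or later.
print(supports(runtime, (3, 10)))         # True
```

The abandoned library's code is unchanged and still "works"; it is the surrounding constraints that make it unusable.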
For any studio larger than a few people, support is the most important part of 3rd party tools. If (and inevitably, when) a tool has a bug or can't do something you need, there's reassurance in being able to tell someone else to fix the problem.
Using abandoned tools means either that you're very sure all your use cases are covered, or that your own engineers are willing to hack around it should it not be sufficient. And I think anyone who works with legacy code knows that navigating an unfamiliar codebase without guidance can take just as long as whipping up a custom implementation.
Definitely, it’s a trade-off. Pulling in dubious dependencies can be a risk. It might be worth writing your own library, or forking the dependency and vendoring it in your source.
There’s a spectrum of options here.