Comment by jvanderbot

5 months ago

Wait are you making the opposite claim? That one should eschew the "correct" formulation in favor of a bespoke one? Despite the stated (and hopefully obvious) difficulties that brings with maintenance, generalization, etc?

You probably haven't seen front-end projects that pull in tons of libraries for a simple sorting or grouping task, sometimes one that's solvable with built-in array functions alone. It's a true nightmare when you have to deal with that kind of project.
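To make that concrete, here's a minimal sketch of the kind of task I mean, using only native array methods. The data shape and field names are made up for illustration:

```typescript
// Grouping and sorting with built-ins only, no lodash needed.
// Sample data and field names are invented for this example.
interface Order {
  customer: string;
  total: number;
}

const orders: Order[] = [
  { customer: "alice", total: 30 },
  { customer: "bob", total: 12 },
  { customer: "alice", total: 7 },
];

// Sort: Array.prototype.sort with a comparator (copy first to avoid mutating).
const byTotal = [...orders].sort((a, b) => b.total - a.total);

// Group: a plain reduce into a Record instead of a library groupBy.
const byCustomer = orders.reduce<Record<string, Order[]>>((acc, o) => {
  (acc[o.customer] ??= []).push(o);
  return acc;
}, {});

console.log(byTotal, byCustomer);
```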

  • What, like lodash? Some of those hail from a time when we didn't have a good set of native methods. So the library is just legacy'd in. But I do agree, lodash performance compared to native functions is crazy bad.

    • Is that the case? I saw a talk by the author of lodash years ago and he touched on performance. The built-in functions are (or were?) implemented in JS, which made it level terrain for a library like lodash to beat "native" performance, and lodash did beat the browser built-ins in some cases. The talk was ten years ago, though, so things may have changed. Perhaps more of the built-ins are written in C++ now (a rough benchmark sketch after this subthread is one way to check).

      Here's the talk: https://youtu.be/2DzaOnOyCqE?si=McCMjzGopzSCoaCi

      4 replies →
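For anyone who wants to check how this stands today, here is a rough micro-benchmark sketch, assuming lodash is installed and Node is the runtime. It is not rigorous (single run, no JIT warmup control), so treat any numbers as directional only:

```typescript
// Rough comparison of native array methods vs their lodash equivalents.
// Assumes `lodash` is installed; timings are directional, not a real benchmark.
import _ from "lodash";
import { performance } from "node:perf_hooks";

const data = Array.from({ length: 1_000_000 }, (_unused, i) => i);

function time(label: string, fn: () => unknown): void {
  const start = performance.now();
  fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
}

time("native map", () => data.map((x) => x * 2));
time("lodash map", () => _.map(data, (x) => x * 2));

time("native filter", () => data.filter((x) => x % 3 === 0));
time("lodash filter", () => _.filter(data, (x) => x % 3 === 0));
```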

I'm sorry, but after reading your comment I can't decide whether you favor writing the dynamic programming version or pulling in the constraint solver.

Both are correct in the sense that they give the right output, and I don't think pulling in a huge library (maintained by who knows, and for how long) is going to be beneficial for maintenance. Having a good understanding of both the precise requirements and the algorithms used to solve them also helps maintenance.

It's just that having two dozen lines of code tucked away in a function in the repo seems infinitely more maintainable to me than using some giant framework (of possibly unknown quality) to solve the issue.

This is a general argument I'm making, not one that applies just to this constraint solver.

  • There are other aspects to maintenance, like requirements changing. In this case it's trivial to change or add new constraints in a constraint solver, whereas even small changes to a typical DP problem can require a total rethink of the approach (a toy sketch after this thread illustrates the point). Extending the analogy to other kinds of dependencies is left as an exercise for the reader.

    Point being that software has many dimensions. Reducing the use of dependencies to fear of learning or thinking is a bit reductive in my opinion, even for stuff that seems simple initially.

    • Imo, it depends on the desired quality of the result and the amount and complexity of bespoke requirements: the more of those are present, the more strongly I consider rolling something bespoke.

      With out-of-the-box libraries, the more custom requirements I have, the more trouble I tend to have supporting them, and trying to make something do a thing it wasn't designed for can erase initial gains very quickly. At least this has been my experience over the years.

      1 reply →
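On the "requirements change" point above, here is a toy sketch of why declarative constraints are easy to extend. This is a naive brute-force search over a small made-up scheduling domain, not a real solver library; the point is only the interface idea that each requirement is an independent predicate, so a new requirement is usually one more line rather than a new algorithm:

```typescript
// Toy "constraint solver": brute-force search over small finite domains.
// Illustrative only; the domain (tasks and slots) is invented for the example.
type Assignment = Record<string, number>;

function solve(
  variables: string[],
  domain: number[],
  constraints: Array<(a: Assignment) => boolean>,
): Assignment | null {
  const search = (a: Assignment, rest: string[]): Assignment | null => {
    if (rest.length === 0) {
      // Complete assignment: accept it only if every constraint holds.
      return constraints.every((c) => c(a)) ? a : null;
    }
    const [v, ...others] = rest;
    for (const value of domain) {
      const found = search({ ...a, [v]: value }, others);
      if (found) return found;
    }
    return null;
  };
  return search({}, variables);
}

// Original requirements: three tasks in distinct slots, taskA before taskB.
const constraints = [
  (a: Assignment) => new Set(Object.values(a)).size === 3, // all slots distinct
  (a: Assignment) => a.taskA < a.taskB,                    // ordering constraint
];

// A later requirement is just one more predicate appended to the list:
constraints.push((a: Assignment) => a.taskC !== 0);

console.log(solve(["taskA", "taskB", "taskC"], [0, 1, 2], constraints));
```

A hand-rolled DP over the same problem would typically bake the ordering and distinctness assumptions into the recurrence itself, which is where the "total rethink" cost comes from when requirements shift.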