Comment by hit8run

19 hours ago

I can relate. But home-cooking is also often reinventing the wheel. Core logic should be home-cooked, but general aspects are probably best solved using battle-proven 3rd-party dependencies? What do you guys think?

Getting what you need out of that general-purpose API from "battle-proven 3rd-party dependencies" comes at a cost. IME:

1. The API isn't quite what you need, so you add so much integration logic that you might as well have reinvented the wheel and had a proper solution, plus you get all the bugs and performance problems that come from bolting extra code onto a domain mismatch.

2. The API covers cases you don't care about, opening you up to bugs and performance problems from the project's extra complexity while not actually saving much time, since the cases you do care about can be handled more simply.

3. Both (1) and (2) are amplified as the project changes going forward. If you wrote the data structure yourself, you can easily tack on the modifications you need. If not, you end up re-implementing it anyway once modifications are needed (or bolting on even more hacks), and the 3rd-party code isn't a good starting point because of (2).

For something like cryptography, with all the ways constant-time execution and whatnot can bite me in the ass, I'm happy to use a 3rd-party dependency. For almost everything else I'll home-cook it. Networking starts at io_uring. CPU float-intensive software starts with (depending on the domain) a Tensor type capable of lazy, fused operations [0].
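
To make the Tensor point concrete, here's a minimal sketch of that lazy, fused style using C++ expression templates (all names are illustrative, not from any particular library): `a + b + c` builds a tiny expression tree, and the only loop runs once, at materialization, with no intermediate Tensors.

```cpp
// Sketch only: a lazy Tensor whose additions fuse into one loop.
#include <cstddef>
#include <iostream>
#include <vector>

struct Tensor;

// Expression node: holds references, computes element i on demand.
template <typename L, typename R>
struct Add {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Tensor {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Materialization: the only place an actual loop runs.
    template <typename Expr>
    void assign(const Expr& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
    }
};

// Overloads restricted to our own types so nothing else is affected.
inline Add<Tensor, Tensor> operator+(const Tensor& l, const Tensor& r) {
    return {l, r};
}
template <typename A, typename B>
Add<Add<A, B>, Tensor> operator+(const Add<A, B>& l, const Tensor& r) {
    return {l, r};
}

int main() {
    Tensor a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, out;
    out.assign(a + b + c);                         // one fused pass
    std::cout << out[0] << " " << out[2] << "\n";  // prints: 12 18
}
```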

That's just a heuristic I use, and from time to time I'll definitely take a shortcut. More often than not, though, I'll be replacing the shortcut in less than a year, and the value of having the feature sooner is only sometimes worth the lost productivity. The driving factor is just that as I learn more it gets easier and easier to home-cook those sorts of things, whereas the costs induced by 3rd parties haven't really diminished.

[0] I'm curious what a language designed around cache obliviousness and reducing data dependencies might look like. There are some patterns I've used which compose nicely, but they only solve part of the problem. Even custom vector languages like ISPC require a fair bit of work from the programmer.

I think the article presents a false dichotomy, but to answer your question: most 3rd-party dependencies can be reimplemented easily if you know how to program (things like cryptography or well-defined algorithms aside). Frameworks and libraries are more like tooling, though they usually have an incentive to lock you into their view of the world.

If you are a good programmer/independent thinker, you will tend to just write software from first principles with limited tooling. It's leaner and faster to build that way, and it's usually more effective software. 3rd-party libraries, though easy to implement yourself, bring their own interfaces/paradigms. They require maintenance/security updates. They are often written by individuals who care little about performance.

There are TONS of exceptions to what I'm saying above, and tons of great packages that I use frequently, but if your default is to solve a problem by installing a package, now you have two problems.

  • Absolutely, but the decision does require some experience. For example, you'd not want to home-cook your own logger; you'd rely on a battle-tested one instead. However, if you want to trim/pad strings, you can easily avoid a third-party dependency (see the sketch below).
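
As a sketch of that string example (C++; names made up for illustration):

```cpp
// Hand-rolled trim/pad helpers: the kind of utility that rarely
// justifies pulling in a dependency.
#include <cassert>
#include <cstddef>
#include <string>

std::string trim(const std::string& s) {
    const char* ws = " \t\r\n";
    auto b = s.find_first_not_of(ws);
    if (b == std::string::npos) return "";  // string was all whitespace
    auto e = s.find_last_not_of(ws);
    return s.substr(b, e - b + 1);
}

std::string pad_left(std::string s, std::size_t width, char fill = ' ') {
    if (s.size() < width) s.insert(0, width - s.size(), fill);
    return s;
}

std::string pad_right(std::string s, std::size_t width, char fill = ' ') {
    if (s.size() < width) s.append(width - s.size(), fill);
    return s;
}

int main() {
    assert(trim("  hi \n") == "hi");
    assert(pad_left("7", 3, '0') == "007");
    assert(pad_right("ab", 4) == "ab  ");
}
```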

A good use for home-cooked software is a UI layer over something clunky, or a glue layer over several systems that are cumbersome to use.

For me, that's the best use of time without reinventing the wheel, though this is specifically for network apps and for addressing pain points of existing SaaS products.

Now with LLMs, I've been recreating just the features I need from 3rd-party dependencies and getting to know much more about how they work under the hood.