Comment by bob1029

24 days ago

"Batteries included" ecosystems are the only persistent solution to the package manager problem.

If your first party tooling contains all the functionality you typically need, it's possible you can be productive with zero 3rd party dependencies. In practice you will tend to have a few, but you won't be vendoring out critical things like HTTP, TCP, JSON, string sanitization, cryptography. These are beacons for attackers: everything depends on this stuff, so the motivation for attacking these common surfaces is high.

I can literally count on one hand the number of 3rd party dependencies I've used in the last year. Dapper is the only regular one I can come up with; sometimes ScottPlot. Both of my SQL providers (MSSQL and SQLite) are first party as well, which is a major reason why they're the only SQL providers I use.

Maybe I am just so traumatized from compliance and auditing in the regulated software business, but this feels like a happier way to build software too. My tools tend to stay right where I left them the previous day. I don't have to worry about my hammer or screwdrivers stealing all my bitcoin in the middle of the night.

There are several issues with "Batteries Included" ecosystems (like Python, C#/.NET, and Java):

1. They are not going to include everything. This includes things like new file formats.

2. They are going to be out of date whenever a standard changes (HTML, etc.), an application changes (e.g. SQLite/PostgreSQL/etc. for SQL/ORM bindings), or an API changes (DirectX, Vulkan, etc.).

3. Things like data structures, graphics APIs, etc. will have performance characteristics that may not match your use case.

4. They can't cover all niche use cases, such as the different libraries and frameworks for creating games of different genres.

For example, Python's built-in XML support (ElementTree) only implements a subset of XPath and doesn't support parsing HTML.

The fact that Python, Java, and .NET have large library ecosystems proves that even if you have a "Batteries Included" approach there will always be other things to add.

> In practice you will tend to have a few, but you won't be vendoring out critical things like HTTP, TCP, JSON, string sanitization, cryptography

Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.

Few languages have good models for evolving their standard library, so you end up with lots of bad designs sticking around forever. Libraries are much easier to evolve, giving them the advantage in terms of developer UX and performance.

  • What type of developer chooses UX and performance over security? So reckless.

    I removed the locks from all the doors, now entering/exiting is 87% faster! After removing all the safety equipment, our vehicles have significantly improved in mileage, acceleration and top speed!

    • >What type of developer chooses UX and performance over security? So reckless.

      Initially I assumed this was sarcastic, but apparently not. UX and performance are what programmers are paid for! Making sure the UX is good is one of the most important parts of a programmer's job.

      Security, meanwhile, is a moving target, a goal, something that can never be perfect, just "good enough" (if the NSA wants to hack you, they will). You make it sound like installing third party packages is basically equivalent to a security hole, while in practice the risk is low, especially if you don't overdo it.

      Wild to read extreme security views like that, while at the same time there are people here that run unconstrained AI agents with --dangerous-skip-confirm flags and see nothing wrong with it.

      3 replies →

    • Better developer UX can directly lead to better safety. "You are holding it wrong" is a frequent source of security bugs, and better UX reduces the ways you can hold it wrong, or at least makes you more likely to hold it the right way

      2 replies →

    • "Security" is often more about corporate CYA than improving my actual security as a user, and sometimes in opposition, and there is often blatant disregard for any UX concession at all. The most secure system is fully encrypted with all copies of the encryption key erased.

  • I'm pretty sure it's really one HTTP library: urllib.request is built on top of http.client. But the very Java-inspired API for the former is awful.

  • > Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.

    The amount of time spent defining the same data structures over and over again, versus `pip install requests` with well-defined data structures.

  • > Few languages have good models for evolving their standard library

    Can you name some examples?

    • Scala could be one example. When I upgraded to a newer version of the standard library (the Scala 2.13 or Scala 3 collections library), there was a tool, Scalafix [1], that could update my source code to work with the new library. I don't think it was perfect (don't remember), but it was helpful.

      [1] https://scalacenter.github.io/scalafix/

    • I've heard Odin [1] does a decent job with this, at least from what I've superficially learned about its stdlib and included modules as an "outsider" (not a regular user). It appears to have built-in support for things like image file formats, and new things are somewhat liberally added to core if they prove practically useful, since there isn't a package manager in the traditional sense. Here's a blog post by the language author literally named "Package Managers are Evil" [2]

      (Please do correct me if this is wrong, again, I don't have the experience myself.)

      [1] https://pkg.odin-lang.org/

      [2] https://www.gingerbill.org/article/2025/09/08/package-manage...

The irony is that Node has no need for Axios: native fetch support has been there for years, so in terms of network requests it is batteries included.
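For the common cases, a native fetch call covers what most axios usage boils down to. A minimal sketch (the endpoint URL and helper name are placeholders, not a real API; note that unlike axios, fetch does not reject on non-2xx statuses, so you check `res.ok` yourself):

```javascript
// Hypothetical helper; the URL is a placeholder, not a real service.
async function createUser(name) {
  const res = await fetch("https://api.example.com/users", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ name }),
  });
  // axios rejects on non-2xx automatically; with fetch you do it yourself.
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

With `async`/`await` this is barely longer than the axios equivalent.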

  • It doesn't matter. We pulled axios out of our codebase, but it still ends up in there as a child or peer from 40 other dependencies. Many from major vendors like datadog, slack, twilio, nx (in the gcs-cache extension), etc...

  • People use axios or ky because with fetch you inevitably end up writing a small wrapper on top of it anyway.

  • Because native fetch lacks retries, error handling is verbose, and search-param and body serialization create a ton of boilerplate. I use the ky HTTP client, a small lib on top of fetch with great UX and a trusted maintainer.
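    For illustration, a sketch of the kind of wrapper this boils down to. This is not ky's actual API, just a hypothetical `fetchWithRetry` with exponential backoff and a per-attempt timeout:

    ```javascript
    // Illustrative wrapper, not ky's API: retry on network errors and 5xx
    // responses, with exponential backoff and a per-attempt timeout.
    async function fetchWithRetry(url, { retries = 3, baseDelayMs = 250, timeoutMs = 5000, ...init } = {}) {
      let lastError;
      for (let attempt = 0; attempt <= retries; attempt++) {
        try {
          const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs), ...init });
          // Success, a 4xx (not retryable), or out of attempts: hand back the response.
          if (res.status < 500 || attempt === retries) return res;
          lastError = new Error(`HTTP ${res.status}`); // transient 5xx: retry
        } catch (err) {
          lastError = err; // network error or timeout
          if (attempt === retries) break;
        }
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
      throw lastError;
    }
    ```

    Every team tends to write some variant of this, which is exactly why libraries like ky exist.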

  • I'm not sure fetch is a good server-side API. The typical fetch-based code snippet `fetch(API_URL).then(r => r.json())` has no response body size limit and can potentially bring down a server due to memory exhaustion if the endpoint at API_URL malfunctions for some reason. Fine in the browser but to me it should be a no-no on the server.

    • > I'm not sure fetch is a good server-side API. The typical fetch-based code snippet `fetch(API_URL).then(r => r.json())` has no response body size limit and can potentially bring down a server due to memory exhaustion if the endpoint at API_URL malfunctions for some reason. Fine in the browser but to me it should be a no-no on the server.

      Nor is fetch a good client-side API either; you want progress indicators, on both upload and download. Fetch is a poor API all-round.

    • You can pass to `fetch` an `AbortSignal` like `AbortSignal.timeout(5000)` as a simple and easy guard.

      If you also want to guard on size, iterating the `response.body` stream with for/await/of and adding a counter that can `abort()` a manual `AbortController` is relatively straightforward, even though it sounds complicated. You can even do that as a custom `ReadableStream` implementation so that you can wrap it back into a `Response` and still use the `response.json()` shortcut. I'm surprised I'm not seeing a standard implementation of that, but it also looks straightforward from the MDN documentation [1].

      [1] https://developer.mozilla.org/en-US/docs/Web/API/Streams_API...
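      As a sketch of that guard (the function name is mine, not a standard API; assumes Node 18+, where `response.body` is an async-iterable ReadableStream):

      ```javascript
      // Read the body incrementally and bail out once a byte budget is exceeded,
      // instead of buffering an unbounded response via response.json().
      async function fetchJsonWithLimit(url, maxBytes = 1_000_000) {
        const controller = new AbortController();
        const res = await fetch(url, { signal: controller.signal });
        const chunks = [];
        let total = 0;
        for await (const chunk of res.body) {
          total += chunk.byteLength;
          if (total > maxBytes) {
            controller.abort(); // actually stop the transfer, not just the parse
            throw new Error(`response body exceeded ${maxBytes} bytes`);
          }
          chunks.push(chunk);
        }
        return JSON.parse(Buffer.concat(chunks).toString("utf8"));
      }
      ```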

    • Browser fetch can lean on the fact that the runtime environment has hard limits per tab, and the user will just close the tab if things get weird. On the server, you're right.

  • Node fetch is relatively new. Wasn't marked stable until 2023, though I've used it since like 2018.

  • It doesn't have a need _now_. Axios is more than 10 years old, and even before axios other libraries provided the same utility of making requests easier.

Batteries included systems are still susceptible to supply chain attacks, they just move slower so it’s not as attractive of a target.

I think packages of a certain size need to be held to higher standards by the repositories. Multiple users should have to approve changes. Maybe enforced scans (though with Trivy's recent compromise, that won't be likely any time soon).

Basically, anything besides a lone developer being able to decide on a whim to send something out that will run on millions of machines.

  • While technically true, it's so much slower that it's essentially a different thing. Third party packages being attacked is a near daily occurrence. First party attacks happen on the timescale and frequency of decades.

    It's like the difference in protecting your home from burglars and foreign nation soldiers. Both are technically invaders to your home, but the scope is different, and the solutions are different.

  • > they just move slower so it’s not as attractive of a target.

    Well, there are other things. Maven doesn't allow you to declare "version >= x.y.z" and doesn't run arbitrary scripts upon pulling dependencies, for one thing. The Java classpath doesn't make it possible to have multiple versions of the same library at the same time. That helps a lot too.

    NPM and the way node does dependency management just isn’t great. Never has been.

The other thing that keeps coming up is the github-code-is-fine-but-the-release-artifact-is-a-trojan issue. It really makes me question whether "packages" should even exist in JavaScript, or whether we could just import standard plain source code from a git repo.

I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.

  • > I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.

    Why wouldn't that work well with legacy projects? In fact, the projects I was part of that I'd call legacy nowadays were built by copy-and-pasting .js libraries into a "vendor/" directory, and that's how we shipped them as well. This was in the days before Bower (which was the npm of frontend development back then); vendoring JS libs was standard practice before package managers became common in frontend development too.

    Not sure why it wouldn't work; JavaScript is a very moldable language, you can make most things work one way or another :)

    • `vendor/` folders give me the worst developer PTSD :p

      6 conflicting versions of jquery, and you know every single one of them was monkey patched, cemented into the codebase forever.

  • This might make things worse not better.

    Yes, the postinstall hook attack vector goes away, and you can do SHA pinning, since Git's content addressing means the SHA is the hash of the content. But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.

    This is basically what Deno's import maps tried to solve, and what they ended up with looked a lot like a package registry again.

    At least npm packages have checksums and a registry that can yank things.

    • You can just git submodule in the dependencies. Super easy. Also makes it straightforward to develop patches to send upstream from within your project. Or to replace a dependency with a private fork.

      In my experience, this works great for libraries internal to an organization (UI components, custom file formats, API type definitions, etc.). I don't see why it wouldn't also work for managing public dependencies.

      Plus it's ecosystem-agnostic. Git submodules work just as well for JS as they do for Go, sample data/binary assets, or whatever other dependencies you need to manage.

    • > But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.

      The irony is that this is actually the current best practice for defending against supply chain attacks in the GitHub Actions layer: pin all action versions to a hash. There's an entire secondary set of dev tools for converting GHA version numbers to hashes.

      1 reply →

Or you don't use a package manager where anyone can just publish a package (i.e. use your system package manager). There is still some risk, but it is much smaller. If xz had been distributed via PyPI or npm, everyone would have been pwned; instead, it was (barely) found.

It's true that system repos don't include everything, but you can create your own repositories if you really need to for a few things. In practice Fedora/EPEL are basically sufficient for my needs. Right now I'm deploying something with Yocto, which is a bit more limited in selection, but it's pretty easy to add my own packages, and it at least has hashes so things don't get replaced without me noticing (to be fair, I don't know if the security practices of OpenEmbedded recipes are as strong as Fedora's...).

  • It's muddying what a package is. A package, or a distro, is the people who labor over packaging: reviewing, deciding on versions to ship, having policies in place, security mailing lists, release schedules, etc.

    Just shipping crap from npm is essentially the equivalent of running your production code base against Arch AUR PKGBUILDs.

Fully agree with this! I think today .NET is probably the most batteries included platform you can get. This means that even if you use third-party libraries, these typically depend only on first-party dependencies, making it much less likely for something shady to sneak in.

  • With the notable exception of cross-platform audio.

    • Not really notable; as I understand it, the only mainstream language with anything like that is JS in the browser.

      And for good reason. There are enough platform differences that you have to write your own code on top anyway.

  • Kinda.

    With Bun I use fewer dependencies from npm than I used from NuGet with .NET to build minimal APIs. For example, the pg driver.

  • I really like Go's batteries-included platform. I am not sure about .NET though.

    • C#'s LINQ (code as data, like LISP) wins over golang for any type of data access. Strongly-typed, language-native queries. Go has its own advantages though.

      1 reply →

  • And now with NativeAOT, you can use C# like Go: you don't need to ship the CLR.

So, you're on Microsoft then; judging by ScottPlot, you write .NET desktop apps. If you use Dapper, you probably use Microsoft.Data.SqlClient, which is... distributed over NuGet and vulnerable to supply chain attacks. You may not need many deps as a desktop dev, but modern line-of-business apps require a lot more: CSVHelper, ClosedXML, AutoMapper, WebOptimizer, NetEscapades.AspNetCore.SecurityHeaders.

Yes, the fewer deps people need the better, but it doesn't fix the core problem. Sharing and distributing code is a key tenet of being able to write modern code.

I agree that dependencies are a liability, but, sadly, "batteries included" didn't work out for Python in practice (e.g. how do I even live without numpy? No, `array` isn't enough).

  • To the extent that Python is indeed "batteries included," that seems true. But just how "batteries included" is it? I'd argue that its batteries are pretty limited. Exhibit A: everybody uses the third-party requests instead of the stdlib urllib. Exhibit B: http.server isn't a production-ready webserver, so people use Flask or something beefier.

    I'd contrast Python with Go, which has an amazing stdlib for the domains that Go targets. This last part is key--Go has a more focused scope than Python, and that makes it easier for its stdlib to succeed.

    • > http.server isn't a production-ready webserver, so people use Flask [...]

      Nit, but relevant nit: Flask is also not a production-grade webserver. You could say it is also missing batteries ... and those batteries are often missing batteries too. Which is why you don't deploy flask, you deploy flask on top of gunicorn on top of nginx. It's missing batteries all the way down (or at least 3 levels down).

      1 reply →

    • We could have different Python package bundles: Python base. Python webdev. Python desktop.

This is a rather superlative, tunnel-vision, "everything looks like a nail because all I have is a hammer" approach. The truth is this is an exceedingly difficult problem nobody has adequately solved yet.

  • I think the AI tooling is, if not completely solving sandboxing, at least making the default much better by asking you every time they want to do something and providing files to auto-approve certain actions.

    Package managers should do the same thing

    • Another effect of AI tooling is that the cost of spinning up your own version of some libraries is lowered, and the result can be made hyper-specific to your needs rather than pulling in a whole library with features you'll never use.

      1 reply →

    • > at least making the default much better by asking you every time they want to do something

      Really? I thought 'asking you every time they want to do something' was called 'security fatigue' and generally considered to be a bad thing. Yes you can concatenate files in the current project, Claude.

      2 replies →

> "Batteries included" ecosystems are the only persistent solution to the package manager problem.

The irony in this case is that axios is not really needed now, given that fetch ships with every modern JS runtime.

Different programmers have very different ideas about what is "all the functionality you typically need."

What are some examples of batteries-included languages that folk around here really feel productive in and/or love? What makes them so great, in your opinion?

(Leaving aside thoughts on language syntax, compile times, tooling etc - just interested in people's experiences with / thoughts on healthy stdlibs)

  • These are the big ones I use, specifically because of the standard libraries:

    Python (decent standard library) - It's pretty much everywhere. There's so many hidden gems in that standard library (difflib, argparse, shlex, subprocess, cmd)

    C#/F# (.NET)

    C# feels so productive because of how much is available in .NET Core, and F# gets to tag along and get it all for free too. With C# you can compile self-contained executables that bundle the runtime and strip it down, so your executables are in the 15 MiB range. If you have dotnet installed, you can run F# as scripts.

    • These are definitely some good thoughts, thanks!

      Do you worry at all about the future of F#? I've been told it's feeling more and more like a second-class citizen on .NET, but I don't have much personal experience.

      2 replies →

  • Go is well known for its large and high quality std lib

    • Go didn't even have versioning for dependencies for ages, so CVE reporting was a disaster.

      And there's plenty of libraries you'll have to pull to get a viable product.

  • I work in a NIS2 compliance sector, and we basically use Go and Python for everything. Go is awesome; Python isn't as much. Go didn't always come with the awesome stdlib that it does today, which is likely partly why a lot of people still use things like Gin for web frameworks rather than simply using the standard library. Having worked with a lot of web frameworks, the one Go comes with is nice and easy enough to extend. Python is terrible, but on the plus side it's relatively easy to write your own libraries with Python, and to use C/Zig to do so if you need to.

    The biggest challenge for us is that we aren't going to write a better MSSQL driver than Microsoft, so we use quite a few dependencies from them, since we are married to Azure. These live in a little more isolation than you might expect, so they aren't updated quite as often as many places might. Still, it's a relatively low risk factor that we can accept.

    Our React projects are the contrast. They live in total and complete isolation, both in development and in production. You're not going to work on React on a computer that will be connected to any sort of internal resources. We've also had to write a novel's worth of legal bullshit explaining how we can't realistically review every line of code from React dependencies for compliance.

    Anyway, I don't think JS/TS is that bad. It has a lot of issues, but then, you could always have written your own wrapper on top of Node's fetch instead of using Axios. Which I guess is where working in the NIS2 compliance sector makes things a little bit different, because we'd always choose to write the wrapper instead of using one others made. With the few exceptions for Microsoft products that I mentioned earlier.

    • This is really interesting, thanks for sharing. Great food for thought.

      Being tightly coupled with MS already, did you ever explore .NET?

      2 replies →

This just moves the trust from one group to another. Now the standard library/language maintainers need to develop and maintain more high-quality software. So either they get overworked and burn out, don't address issues, and fail to update things, or they recruit more people who need to be trusted. Then they are responsible for doing the validation that you should have done. Are they better equipped to do that? Maybe they go: oh hey, Axios is popular and widely trusted, let's make it an official library and bring the maintainers into the fold... wait, isn't this exactly where we started?

By what process did you come to trust the standard library/language maintainers in the first place? How do they differ from any other major library vendor?

I agree with you and follow the same principles myself, but JavaScript already has HTTP built in, and yet everyone still uses Axios. So the problem isn't that JS doesn't have batteries; it's that people don't want to use them for some reason.

I'm guessing it's similar to the tragedy of the commons phenomenon. When things are freely available people tend to overuse or carelessly use them. NPM is just too easy to use. If a package offers a 1% ergonomics increase over a builtin function, many folks will just go for it because it costs them nothing (well, it seems to cost them nothing).

While it's true that the packages are first party, .NET still relies on packages to distribute code that's not directly inside the framework. You still probably transitively depend on `Microsoft.Extensions.Hosting.Abstractions`, for example; if the process for publishing this package was compromised, you'd still get owned.

Not at all. We simply need M-of-N auditors to sign off on major releases of things. And the package managers need to check this (the set of auditors can be changed, same as browser PKI for https) before pulling things down.

That's the system we have in our Safebox ecosystem

But JavaScript is batteries-included in this case: you can use XMLHttpRequest or fetch.

For a lot of code, I switched to generating code rather than using 3rd party libraries. Things like PEG parsers, path finding algorithms, string sanitizers, data type conversion, etc are very conveniently generated by LLMs. It's fast, reduces dependencies, and feels safer to me.

  • Or find the best third party library, and copy into your source tree the code from a widely used version that has been out long enough to have been well tested.

    The problem is not third party libraries. It is updating third party libraries when the version you have still works fine for your needs.

    • Don't do this. Use a package manager that lets you pin against a specific version. Vendoring sidesteps most automated tooling that can warn you about vulnerabilities. Vendoring is a signal that your tooling is insufficient, 99% of the time.

      5 replies →

yep!

This is exactly the world I'm working towards: packaging tooling with a virtual machine, i.e. like Electron but with virtual machines instead, so the isolation comes by default.

Honestly, you can get pretty far with just Bun and a very small number of dependencies. It’s what I love most about Bun. But, I do agree with you generally. .NET is about as good as I’ve ever seen for being batteries included. I just hate the enterprisey culture that always seems to pervade .NET shops.

  • I agree about the culture. If I take my eye off the dev team for too long, I'll come back and we'll be using Entity Framework, with a 20-page document about configuring code cleanup rules in Visual Studio.

Language churn makes this problem worse.

Frankly, inventing a new language is irresponsible these days unless you build on top of an existing ecosystem, because you need to solve all these problems.

I agree. Got downvoted a lot the other day for proposing Node should solve fundamental needs.

> "Batteries included" ecosystems are the only persistent solution

Or write your own stuff. Yes, that's right, I said it. Even HTTP. Even cryptography. Just because somebody else messed it up once doesn't mean nobody should ever do it. Professional quality software _should_ be customized. Professional developers absolutely can and should do this and get it right. When you use a third-party HTTP implementation (for example), you're invariably importing more functionality than you need anyway. If you're just querying a REST service, you don't need MIME encoding, but it's part of the HTTP library anyway because some clients do need it. That library (that imports all of its own libraries) is just unnecessary bloat, and this stuff really isn't that hard to get right.

  • > When you use a third-party HTTP implementation (for example), you're invariably importing more functionality than you need anyway. If you're just querying a REST service, you don't need MIME encoding, but it's part of the HTTP library anyway because some clients do need it. That library (that imports all of its own libraries) is just unnecessary bloat, and this stuff really isn't that hard to get right.

    This post is modded down (I think because of the "roll your own crypto vibe", which I disagree with), but this is actually spot on the money for HTTP.

    The surface area for HTTP is quite large, and your little API, which never needed range-requests, basic-auth, multipart form upload, etc suddenly gets owned because of a vulnerability in one of those things you not only never used, you also never knew existed!

    "Surface area" is a problem, reducing it is one way to mitigate.

    • > the "roll your own crypto vibe", which I disagree with

      Again, you run into the attack surface area here. Think about the Heartbleed vulnerability: a bug in OpenSSL's implementation of the heartbeat extension (added largely for DTLS), but it affected every single user, including the 99% that weren't using DTLS.

      Experienced developers can, and should, be able to avoid things like side-channel attacks and the other gotchas that scare folks off of rolling their own crypto. The right solution here is better-defined, well-understood acceptance criteria and test cases, not blindly trusting something you downloaded from the internet.

      2 replies →