Comment by ChrisMarshallNY
2 days ago
> the kind of dependency developers install without a second thought
Kind of a terrifying statement, right there.
Yeah, I mean, this is a tough problem. Unless you work for a government contractor with strict security policies, most devs are just going to run npm install without a second thought, because there are so many packages.
I don't know what the solution here is, other than to stop using npm.
> I don't know what the solution here is, other than to stop using npm
Personally I think we need to start adding capability-based systems to our programming languages. Random code shouldn't have "ambient authority" to just do anything on my computer with the same privileges as me. If a function's signature says it takes some data and returns an integer, then it should only be able to read its input and return any integer it wants. It shouldn't get ambient authority to access anything else on my computer. No network access. No filesystem. Nothing.
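A minimal sketch of that kind of signature in TypeScript (illustrative only; `checksum` is a made-up example, and today's TypeScript does not actually enforce this isolation):

```typescript
// A pure function: its parameter list is its entire world.
// It takes bytes in and returns an integer; under a capability
// model, that's all the authority it would ever have.
function checksum(data: Uint8Array): number {
  let sum = 0;
  for (const byte of data) {
    sum = (sum + byte) % 65536; // read input, compute, nothing else
  }
  return sum;
}
```

Nothing in the signature names a file, a socket, or an environment variable, so under the proposed model there would be nothing for the function to abuse.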
Philosophically, I kind of think of it like function arguments and globals. If I call a function foo(someobj), then function foo is explicitly given access to someobj. And it also has access to any globals in my program. But we generally consider globals to be smelly. Passing data explicitly is better.
But the whole filesystem is essentially available as a global that any function, anywhere, can access. With full user permissions. I say no. I want languages where the filesystem itself (or a subset of it) can be passed as an argument. And if a function doesn't get passed a filesystem, it can't access a filesystem. If a function isn't passed a network socket, it can't just create one out of nothing.
I don't think it would be that onerous. The main function would get passed "the whole operating system" in a sense - like the filesystem and so on. And then it can pass files and sockets and whatnot to functions that need access to that stuff.
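The structure described above could look something like this (the `OS`, `FileReader`, and `Net` interfaces are hypothetical, purely to show the shape of the idea; no mainstream runtime works this way today):

```typescript
// Hypothetical capability types, not a real API.
interface FileReader { read(path: string): string }
interface Net { connect(host: string, port: number): void }
interface OS { fs: FileReader; net: Net }

// Needs only file access, so that's all it is handed.
function loadConfig(fs: FileReader): string {
  return fs.read("/etc/app.conf"); // can read files; cannot open sockets
}

// main() is given "the whole operating system" and delegates
// narrowed slices of it to the code that genuinely needs them.
function main(os: OS): string {
  return loadConfig(os.fs); // the network capability is never passed along
}
```

Because `loadConfig` never receives `os.net`, it has no way to exfiltrate what it reads: the authority simply isn't in scope.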
If we build something like that, we should be able to build something like npm but where you don't need to trust the developers of 3rd party software so much. The current system of trusting everyone with everything is insane.
I couldn't agree with you more. The thing is, our underlying security models protect systems from their users, but do nothing to protect user data from the programs users run. A capability-based security model would fix that.
3 replies →
> No network access. No filesystem. Nothing.
Ironically, any C++ app I've written on Windows does exactly this: "Are you sure you want to allow this program to access networking?" At least the first time I run it.
I also rarely write/run code for Windows.
4 replies →
The issue with npm is that JS doesn't have a stdlib, so developers need to rely on npm and third-party libs even for things the stdlib provides in languages like Java, Python, Go, ...
Sure it does. The JS standard library these days is huge. It's way bigger than C's, Zig's, or Rust's. It includes:
- Random numbers
- Timezones, date formatting
- JSON parsing & serialization
- Functional programming tools (map, filter, reduce, Object.fromEntries, etc)
- TypedArrays
And if you use Bun or Node.js, you also have out-of-the-box access to an HTTP server, filesystem APIs, gzip, TLS, and more. And if you're working in a browser, almost everything in jQuery has since been pulled into the browser too, e.g. document.querySelector.
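For instance, all of the following runs on a bare Node.js install, with no npm install at all:

```javascript
// JSON parsing & serialization: built in.
const user = JSON.parse('{"name":"Ada","visits":3}');
console.log(JSON.stringify({ ...user, visits: user.visits + 1 }));

// Functional tools: built in.
const evens = [1, 2, 3, 4, 5].filter(n => n % 2 === 0).map(n => n * 10);
console.log(evens); // [ 20, 40 ]

// Timezone-aware date formatting: built in via Intl.
const tokyo = new Intl.DateTimeFormat("en-US", {
  timeZone: "Asia/Tokyo",
  dateStyle: "medium",
});
console.log(tokyo.format(new Date(0))); // the Unix epoch, as seen from Tokyo

// TypedArrays: built in.
const bytes = new Uint8Array([104, 105]);
console.log(new TextDecoder().decode(bytes)); // "hi"
```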
Of course, web frameworks like React aren't part of the standard library in JS. Nor should they be.
What more do you want JS to include by default? What do Java, Python, and Go have in their standard libraries that JS is missing?
6 replies →
JS has a stdlib, so to speak. See Node.js and the Web standards.
And no programming language's stdlib includes, e.g., WhatsApp API libraries.
Developing in a container might mitigate a lot of issues. Harder to compromise your development machine.
I guess if you ship it, you're still passing the contagion along.
> unless you work for a government contractor where they have strict security policies
... So you're saying there is a blueprint for mitigating this already, and it just isn't followed?
It's more work and more restrictive, I suppose. Any business is free to set up JFrog Artifactory and only allow the installation of approved dependencies. And anyone can pull Iron Bank images, I believe.
1 reply →
Yes, but it requires people. Typically, you identify a package you want (or a new version of a package you want) and you send off a request to a separate security team. They analyze and approve, and the package becomes available in your internal package manager. But this means 1) you need that team of people to do that work, and 2) there's a lot of hurry-up-and-wait involved.
4 replies →
Every Docker image specified in a k8s YAML, docker-compose file, or GitHub Action that doesn't end in @sha256:<digest> (i.e., that's specified only by tag) is one "docker push" away from a compromise, given that tags are not cryptographically bound to content. You're just trusting Docker Hub and the publisher (or anyone with their creds) not to rug you.
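Concretely, the difference looks like this in a compose file (the image name and digest here are made up for illustration):

```yaml
services:
  app:
    # Mutable tag: whoever controls "1.4.2" tomorrow controls your deploy.
    # image: example/app:1.4.2

    # Content-addressed digest: a re-pushed tag can't change what this resolves to.
    image: example/app@sha256:6b0f9a3c5d8e1f2a4b7c9d0e3f5a6b8c1d2e4f5a6b7c8d9e0f1a2b3c4d5e6f7a
```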
The industry runs on a lot more unexamined trust than people think.
They’re deployed automatically by machine, which definitionally can’t even give it a second thought. The upstream trust is literally specified in code, to be reused constantly automatically. You could get owned in your sleep without doing anything just because a publisher got phished one day.
That's one reason I barely use any dependencies. I'm forced to use a couple, but I tend to "roll my own," quite a bit.
Well, I should qualify that. I do use quite a few dependencies, but they are ones that I wrote.
Requiring the use of lockfiles, and strict adherence to checking updates, also helps. I tend to use dependencies for many things, but only ones I've trusted over a long time: I know how they work, and I often chose them because of how they were implemented, so I can see the updates and review them myself. Scaling up to a team, you make that part of the process whenever you add a new dependency, and someone's name always has to be "assigned" to each dependency, so people take ownership of the code that gets added. Often people figure out it's not worth it and find a simpler way.
1 reply →
I have to trust the publisher, otherwise I can't update, and I have to update because CVEs exist. If we step back: how do I even know that the image blessed with a hardcoded hash (double-checked against the website of whoever is supposed to publish it) isn't backdoored right now?
Because it has been out and published and used for weeks/months. The longer an artifact is public and in use, the less chance it has of being malicious.
2 replies →
Pinning a GitHub Actions action doesn't prevent the action itself from doing an apt install, an npm install, or running a Docker image that is not pinned.
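For example (workflow fragment is illustrative, and the commit SHA is a placeholder, not a real one):

```yaml
steps:
  # The action itself is pinned to a full commit SHA...
  - uses: some-org/setup-tool@a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
  # ...but nothing pins what runs *inside* the step:
  - run: |
      sudo apt-get install -y some-package  # whatever the mirror serves today
      npm install                           # resolves unpinned transitive deps
```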
It's terrifying because it's true for a majority of developers.
It's also hyperbole.
I've worked in plenty of JavaScript shops, and unfortunately it's not so far off the mark. It's quite common to see JS projects with thousands of transitive dependencies. I've seen the same in Python, too.
It's funny how Python has less of this reputation just because the package manager is so broken that you might have a hard time adding that many deps in the first place. (Maybe fixed with uv, but that's relatively new and not the default.)
Until you start doing SBOM analysis and seeing what developers are actually pulling in out in the field.
I'm not so sure about that.
I've watched developers judge dependencies by GH stars, and "shiny" quotient.
On a completely unrelated tangent, I remember reading about a "GH Stars as a Service" outfit. I don't see any way that could be abused, though.../s