Comment by ahartmetz
21 hours ago
That person seems to be confused. Installing a single, statically linked binary is clearly simpler than managing a container?!
Also strikes me as not fully understanding what exactly docker is doing. In reference to building everything in a docker image:
"Unfortunately, this will rebuild everything from scratch whenever there's any change."
In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a Docker container. Double-check any build settings that might accidentally embed local paths, in case those paths contain anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)
It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.
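For what it's worth, the "build locally, slurp it in" Dockerfile can be as small as this sketch (the binary name and path are made up, and it assumes the binary really is statically linked):

```dockerfile
# Sketch only: a statically linked binary built on the host machine,
# copied into an empty image. No toolchain inside the container at all.
FROM scratch
COPY ./target/release/mysite /mysite
ENTRYPOINT ["/mysite"]
```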
Thank you! I got a couple minutes in and was confused as hell. There is no reason to do the builds in the container.
Even at work, I have a few projects where we had to build a Java uber jar (all the dependencies bundled into one big jar), and when we need it containerized we just copy the jar in.
I honestly don't see much reason to do builds in the container unless there is some limitation in my CI/CD pipeline where I don't have access to the necessary build tools.
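The resulting Dockerfile for that kind of setup tends to be a few lines; a sketch with made-up paths, assuming a Temurin JRE base image:

```dockerfile
# Sketch only: copy a pre-built uber jar into a JRE-only runtime image.
FROM eclipse-temurin:21-jre
COPY build/libs/app-all.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```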
It's pretty clear that this whole project was god-tier level procrastination so I wouldn't worry too much about the details. The original stated problem could have been solved with a 5-line shell script.
Half the point of containerization is to have reproducible builds. You want a build environment that you can trust will be identical 100% of the time. Your host machine is not that. If you run `pacman -Syu`, you no longer have the same build environment as you did earlier.
If you now copy your binary to the container and it implicitly expects there to be a shared library in /usr/lib or wherever, it could blow up at runtime because of a library version mismatch.
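Concretely, the failure mode looks something like this (hypothetical image and binary names): a dynamically linked binary built on an up-to-date rolling host, dropped into a base image with older system libraries.

```dockerfile
# Sketch of the risk: `mysite` was linked on the host against a newer
# glibc/OpenSSL than what this older base image ships, so it can fail to
# start even though it runs fine on the machine that built it.
FROM debian:bullseye-slim
COPY ./target/release/mysite /usr/local/bin/mysite
CMD ["/usr/local/bin/mysite"]
```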
Nobody is suggesting to copy the binary to the Docker container.
When developing locally, use `cargo test` from the command line. When deploying to the server, build the Docker image on CI. If it takes 5 minutes to build, so be it.
From the article, the goal was not to simplify, but rather to modernize:
> So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.
Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.
So put the binary in the container. Why does it have to be compiled within the container?
That is what they are doing. It's a two-stage Dockerfile.
First stage compiles the code. This is good for isolation and reproducibility.
Second stage is a lightweight container to run the compiled binary.
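For anyone who hasn't read the article, the pattern looks roughly like this; a minimal sketch with made-up names and tags, not the author's actual Dockerfile:

```dockerfile
# Stage 1: build in a pinned toolchain image so the build environment
# doesn't drift with the host (tag is illustrative).
FROM rust:1.78-bookworm AS builder
WORKDIR /src
COPY . .
RUN cargo build --release

# Stage 2: ship only the compiled binary in a small runtime image.
FROM debian:bookworm-slim
COPY --from=builder /src/target/release/mysite /usr/local/bin/mysite
CMD ["/usr/local/bin/mysite"]
```

Only the second stage gets deployed, so the toolchain never ships with the site.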
Why is the author being attacked (by multiple comments) for not making things simpler, when simplicity was never claimed as the goal? They are modernizing it.
Containers are good practice for CI/CD anyway.
Mightily resisting the urge to be flippant, but all of those benefits were achieved before Docker.
Docker is a (the, in some areas) modern way to do it, but far from the only way.
Increased security compared to bare hardware, but lower than VMs. Also lower than Jails and rkt (Rocket), which seems to be dead.
> process isolation, increased security
no, that's sandboxing.
Exactly. I immediately thought of the grug brain dev when I read that.