
Comment by bane

4 months ago

Containers happened because nobody can be bothered to build an entire application into a single distributable executable anymore; heck, even the tooling for that barely exists now. But instead of solving problems like dependency management and linking, today's engineers simply build piles of abstraction into the problem space until the thing you want to do more than anything (i.e., execute an application) becomes a single call.

Of course you now need to build and maintain those towers of abstraction, so more jobs for everybody!

by "today's engineers", do you mean "2001 engineers"?

That's when sbuild[0], a tool for building deb packages in containers, was created. It was pretty innovative in that it started from a clean container every time, and thus would build debs reliably even if the user's machine had some funky dependencies installed.

(Note: those were schroot containers; Docker did not exist back then.)

[0] https://metadata.ftp-master.debian.org/changelogs//main/s/sb...
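
For the curious, a minimal sketch of that workflow, roughly as sbuild still works today. The suite, chroot path, mirror, and package name below are illustrative, so check your version's man pages:

    # One-time setup: create a pristine build chroot for a suite
    # (path and mirror are examples)
    sudo sbuild-createchroot unstable /srv/chroot/unstable-amd64-sbuild \
        http://deb.debian.org/debian

    # Each build unpacks a fresh copy of that chroot, installs only the
    # declared build-dependencies, and builds the package inside it, so
    # whatever happens to be installed on the host can't leak in
    # ("hello" is a hypothetical source package)
    sbuild -d unstable hello_1.0-1.dsc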

This is what happens when hardware is too cheap.

  • You sure? Which hardware?

    Put another way: stuff like Electron makes a pretty good case for the claim that "cheap hardware leads to shitty software quality/distribution mechanisms". But does Docker? Containers generally don't cost any more hardware to run than any other app, apart from disk space, and disk space has always been (at least since the advent of the discrete HDD) one of the cheapest parts of a computer to scale up.

    • If you go back to the Sun days, you literally could not afford enough servers to run one app per server, so instead you'd hire sysadmins to figure out how to run Sendmail and Oracle and whatever else on one server without them conflicting. Then x86/Linux 1Us came out, and people started just running one app per server ("server sprawl"), which was easy because there was nothing to conflict with. This later became VM sprawl, and containers were an optimization on that.
