
Comment by lproven

4 months ago

> UNIX could have done with a re-thinking of how you deal with software, but that never happened.

I think it did, but the Unix world has an inherent bad case of "not invented here" syndrome, and a deep cultural reluctance to admit that other systems (OSes, languages, and more) do some things better.

NeXTstep fixed a big swath of issues (in the mid-to-late 1980s). It threw out X and replaced it with Display PostScript. It threw out some of the traditional filesystem layout and replaced it with `.app` bundles: every app in its own directory hierarchy, along with all its dependencies. Isolation and dependency packaging in one.

(NeXT realised both that this is important and that it has to stay readable and user-friendly: it replaced parts of the traditional filesystem layout with something more readable. Fifteen years later, Nix learned the first lesson but forgot the second, so it throws out the traditional FHS and replaces it with something less readable, which needs software to manage it. The NeXT way means you can install an app with a single `cp` command or one drag-and-drop operation.)
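In practice that means installing or removing an app on NeXTstep's descendants is just a copy or a delete. A minimal sketch, using a made-up bundle called `Foo.app`:

```sh
# Install: copy the whole bundle -- executable, libraries, resources and all --
# into place as a single directory tree.
cp -R Foo.app /Applications/

# Uninstall: delete that one tree and the app's private dependencies go with it.
rm -rf /Applications/Foo.app
```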

Some of this filtered back upstream to Ritchie, Thompson and Pike, resulting in Plan 9: bin X and replace it with something simpler and filesystem-based, and virtualise the filesystem itself with per-process namespaces, so that everything effectively runs in a container with its own virtual filesystem.

But it wasn't Unixy enough, so you couldn't easily move existing code to it. And it wasn't FOSS, and it arrived at the same time that a just-barely-good-enough FOSS Unix for COTS hardware was appearing: Linux on x86.

(The BSDs treated x86 as a second-class citizen, with grudging, limited support and the traditional infighting.)

I can’t remember NeXTStep all that well anymore, but the way applications are handled in Darwin is a partial departure from the traditional Unix way. Partial, because although you can mostly make applications live in their own directory, you still have shared, global directory structures where app developers can inflict chaos, sometimes necessitating third-party solutions for cleaning up after applications.

But people don’t use Darwin for servers to any significant degree. I should have been a bit more specific and narrowed it down to Linux and possibly some BSDs that are used for servers today.

I see the role of Docker as mostly a way to contain the “splatter” style of installing applications. Isolating the mess that is my application from the mess that is the system so I can both fire it up and then dispose of it again cleanly and without damaging my system. (As for isolation in the sense of “security”, not so much)
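For instance, a throwaway container built from the stock `nginx` image leaves nothing behind on the host once it stops (a sketch, not a hardened setup):

```sh
# --rm deletes the container the moment it stops, so none of the app's
# files, logs or config ever land in the host's filesystem hierarchy.
docker run --rm -d --name scratch-nginx -p 8080:80 nginx

# Done with it? Stop it and it is gone; only the pulled image remains,
# and `docker rmi nginx` disposes of that too.
docker stop scratch-nginx
```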

  • > a way to contain the “splatter” style of installing applications

    Darwin is one way of looking at it, true. I just referred to the first publicly released version. NeXTstep became Mac OS X Server became OS X became macOS, iOS, iPadOS, watchOS, tvOS, etc. Same code, many generations later.

    So, yes, you're right, little presence on servers, but still, the problems aren't limited to servers.

    On DOS, on classic Mac OS, on RISC OS, on DR GEM, on AmigaOS, on OS/2, and later on 16-bit Windows, the way you install an app is that you make a directory, put the app and its dependencies in it, and maybe amend the system path to include that directory.

    All single-user OSes, of course, so do what you want with %PATH% or its equivalent.

    Unix was a multi-user OS for minicomputers, so the assumption was that an app would be shared. So, break it up into bits, and store those component files in the OS's existing filesystem hierarchy (the FHS): binaries in `/bin`, libraries in `/lib`, config in `/etc`, logs and state in `/var`, and so on -- and you can leave $PATH alone.
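    For a hypothetical app `foo`, that splatter looks something like this:

    ```
    /bin/foo           # the executable, already on everyone's $PATH
    /lib/libfoo.so.1   # its shared library
    /etc/foo.conf      # system-wide configuration
    /var/log/foo.log   # logs
    /var/lib/foo/      # state
    ```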

    Made sense in 1970. By 1980 it was on big shared departmental computers; still made sense. By 1990 it was on single-user workstations, but they cost as much as minicomputers, so why change?

    The thing is, the industry evolved underneath. Unix ended up running on a hundred million times more single-user machines (and VMs and containers) than multiuser shared hosts.

    The assumption that the machine would be shared turned out to be wrong. Sharing is the exception, not the rule.

    NeXT's key insight was to keep only the essential bits of the shared FHS layout, to embed all the dependencies in a folder tree for each app -- and then to provide OS mechanisms to recognise and manipulate those directory trees as individual entities.
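    On NeXT's modern descendants, a bundle for a made-up `Foo.app` looks roughly like this (details have shifted over the years, but the principle is the point):

    ```
    Foo.app/                      # one directory tree == one app
      Contents/
        Info.plist                # metadata the OS uses to treat the tree as a unit
        MacOS/Foo                 # the executable
        Frameworks/libfoo.dylib   # bundled dependencies
        Resources/                # icons, UI files, translations, ...
    ```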

    Plan 9 virtualised the whole FHS. Clever but hard to wrap one's head around. It's containers all the way down; no "real" FHS.

    Docker virtualises it using containers. Also clever but in a cunning-engineer's-hacky-kludge kind of way, IMHO.

    I think GoboLinux maybe made the smartest call: do the NeXT thing and junk the existing hierarchy -- but make a new, more readable one, with the filesystem itself as the isolation mechanism, and apply it to the OS and its components as well. Then you have much less need for containers.
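    From memory, and with details varying between releases, the GoboLinux layout for a hypothetical program `Foo` is along these lines:

    ```
    /Programs/Foo/1.0/bin/foo    # each program keeps its own readable tree
    /Programs/Foo/1.0/lib/...
    /Programs/Foo/Current        # symlink selecting the active version
    /System/Index/bin/foo        # a symlink farm provides the flat compatibility view
    ```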