
Comment by PaulHoule

1 year ago

I find it kind of amusing that the dynamic configuration problem of hardware is so tough. It makes me think of the old mainframe and minicomputer OSes of the 1970s, which avoided all that by starting out with some configuration that supported one terminal and limited storage devices and would recompile the OS for the exact hardware configuration of the machine and print it to a paper tape or magnetic tape and they'd boot off that. Thus you had a "systems programmer" at every mainframe installation.

That part of the industry got into dynamic configuration to support hot-plugging and, more generally, changing the hardware configuration without causing downtime.

> which avoided all that by starting out with some configuration that supported one terminal and limited storage devices and would recompile the OS for the exact hardware configuration of the machine and print it to a paper tape or magnetic tape and they'd boot off that.

Not even. The OEM-shipped machine-specific bootstrap tape (i.e. the one that "supported one terminal and limited storage devices") was still used for initial system bringup, even after you recompiled your userland software distribution of choice for your computer's weird ISA and wrote it out to a tape. The OEM-shipped machine-specific bootstrap tape got mounted on /; brought up the system just enough to mount other devices; and then the userland software distribution got mounted to /usr.

(Back then, you wouldn't want to keep running things from the machine-specific bootstrap tape — the binaries were mostly very cut-down versions, of the kind that you could punch in from panel toggle switches in a pinch. You couldn't unspool the tape, because the kernel and some early daemons were still executing from the tape; but you wouldn't want anything new to execute from there. Thus $PATH. In /usr/bin you'd find a better shell; a better ls(1); and even what you'd think of today as rather "low-level" subsystems — things like init(8). $PATH was /usr/bin:/bin because "once you have /usr/bin, you don't want to be invoking any of those hyperminimal /bin versions of anything any more; you want the nice convenient /usr/bin ones.")
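(A side note on the mechanics, since it's easy to forget how little is going on there: $PATH ordering works purely by first match, so listing /usr/bin before /bin means the full-featured binaries shadow the hyperminimal ones as soon as /usr is mounted. Here's a rough illustrative sketch in Python, not tied to any particular historical shell; resolve() is just a made-up helper name:)

    import os

    def resolve(command, path="/usr/bin:/bin"):
        # Walk the PATH entries left to right and return the first executable
        # match, so /usr/bin/sh shadows a cut-down /bin/sh once /usr is mounted.
        for directory in path.split(":"):
            candidate = os.path.join(directory, command)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
        return None

    # Usage: picks /usr/bin/sh if it exists, otherwise falls back to /bin/sh.
    print(resolve("sh"))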

Ah, back when the whole supply chain had a single manufacturer and no one worried about whether someone might want to put in, say, two video cards or the like.

Apple still kind of exists in this space.

  • Ironically, Apple implemented dynamic hardware configuration long before it was a standard feature in PC platforms.

    I was tempted to jump on the "two video cards" example, but the original IBM PC could support both a CGA card (for color) and an MDA card (monochrome, sharper text) in the same host. I never did that myself, but every card I did use required you to flip switches or jumpers on each ISA board to configure its interrupts and the memory addresses of its I/O ports.

    Apple adopted NuBus for its Macintosh expansion platform. Boards were plug and play, automatically configured. Of course, the hardware required on the NuBus card to support this functionality was the better part of a whole separate Mac in its own right; the hardware dev kit cost $40,000.

    Two video cards in a Mac just worked.

    (Of course, I took your comment to refer to hardware less than 20 years old. But even now, there's dynamic hardware. Apple loved Thunderbolt because they wanted external expansion cards over a wire.)

  • Wasn't like that at all with DEC, and I don't think so with IBM mainframes either.

    It was common for DEC systems to have custom Unibus cards

    https://en.wikipedia.org/wiki/Unibus

    as these were really easy to make. They dealt with them by building custom drivers right into the OS when they built an OS image.

    Circa 2002 a friend of mine developed custom printer interfaces and drivers for IBM z because IBM's printer interface couldn't support the high rate of printing that New York state needed to satisfy the mandate that any paperwork could be turned around in 2 days or less.

    Whatever you say about NY, it is impressive how fast you get back your tax returns, driver's license, permit to stock triploid grass carp, or anything routine like that.

    • But it also meant that the release of a new computer often required a new OS release; with DEC these were often patch releases that added just enough code to run the devices included in the new machine, because older versions would at best boot into something unusable.

      As for IBM mainframes, the list of devices directly attachable at the OS level is quite small, and even then an application with appropriate privileges could send control words directly to a channel. That said, things like printers would probably be intermediated by communication controllers translating from the channel interface to serial lines.

The PC really is unique as a computing platform in how open to third-party extension and customization it ended up becoming (even though that was definitely not IBM's intention!). This has mostly been very good for the consumer, but the combinatorial explosion of different hardware combinations was for a long time a compatibility nightmare, and to some extent still is.