
Comment by helf

3 years ago

The fact that so much hardware these days is running a full real-time OS all the time annoys me. I know it's normal and understandable, but everything is such a black box, and it has already caused headaches (looking at you, Intel).

This isn't even that new of a thing. The floppy disk drive sold for the Commodore 64 included its own 6502 CPU, ROM, and RAM, and ran its own disk operating system[1]. Clever programmers would upload their own code to the disk drive to get faster reads and writes, pack data more densely on the disk, and even implement copy-protection schemes that could validate the authenticity of a floppy (roughly along the lines of the sketch below).

1: https://en.wikipedia.org/wiki/Commodore_DOS
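
Uploading code to the drive went through its command channel. As a minimal, hypothetical sketch (assuming cc65's <cbm.h> helpers; the payload and the $0500 buffer address are purely illustrative), sending code with the documented Commodore DOS "M-W" (memory-write) command and starting it with "M-E" (memory-execute) looks roughly like this:

```c
/* Hypothetical sketch: push a few bytes of 6502 code into 1541 drive RAM
 * with the Commodore DOS "M-W" (memory-write) command, then run it with
 * "M-E" (memory-execute). Assumes cc65's <cbm.h>; the payload below is a
 * placeholder (a single RTS), not a real fastloader. */
#include <cbm.h>
#include <string.h>

static const unsigned char drive_code[] = { 0x60 }; /* RTS: placeholder */

void upload_and_run(void)
{
    unsigned char cmd[6 + sizeof drive_code];

    cmd[0] = 'M'; cmd[1] = '-'; cmd[2] = 'W';   /* memory-write command  */
    cmd[3] = 0x00; cmd[4] = 0x05;               /* target $0500 (lo, hi) */
    cmd[5] = sizeof drive_code;                 /* byte count            */
    memcpy(cmd + 6, drive_code, sizeof drive_code);

    cbm_open(15, 8, 15, "");                    /* command channel, drive 8  */
    cbm_write(15, cmd, sizeof cmd);             /* write code into drive RAM */
    cbm_write(15, "M-E\x00\x05", 5);            /* execute at $0500          */
    cbm_close(15);
}
```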

  • And all that engineering resulted in a floppy drive that was slower and more expensive than comparable units for other home computers. I'm not sure if there is a lesson there...

    • Well, it was slower due to a hardware problem. Basically, the hardware serial device had a bug that forced the comms channel to the disk drive to be bit-banged in software. Doing that amidst the sometimes aggressive video DMA is what caused all the slowdowns.

      Back in the day I owned machines that did it both ways, but not a C64. My Atari computer also had a smart disk drive. It worked over something Atari called SIO, an early ancestor of modern USB. Back then, the Atari machine was device independent, and that turned out to be great engineering!

      Today we have FujiNet devices that basically put Atari and other computers on the Internet, even to the point of being able to write BASIC programs that do meaningful things online.

      The C64 approach was not much different, working via RS-232. But for a bug, it would have performed nicely.

      Now, my other machine was an Apple ][, and that disk was all software. And it was fast! And being all software meant people did all sorts of crazy stuff on those disk drives ranging from more capacity to crazy copy protection.

      But... That machine could do nothing else during disk access.

      The Atari and C64 machines could do stuff while accessing their disks.

      Today, that FujiNet device works via the SIO on the Atari, with the Internet being the N: device! On the Apple, it works via the SmartPort, which worked with disk drives that contained? Wait for it!!

      A CPU :)

      Seriously, your point is valid. But it's not really valid in the sense you intended.

    • The slowness was due to a hardware bug in the 6522 VIA chip. The shift register (FIFO) would lock up randomly. Since this couldn't be fixed before the floppy drive needed to be shipped, they had the 6502 CPU bit-bang the IEC protocol, which was slower. The hardware design for the 154x floppy drive was fine, and some clever software tricks allow stock hardware to stream data back to the C64 and decode the GCR at the full media rate.

      https://www.linusakesson.net/programming/gcr-decoding/index....
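
      For a feel of what "decoding the GCR" means: the 1541 stores each data nibble as a 5-bit GCR code on disk. A small illustrative sketch in C (the table values are the standard Commodore GCR codes; the function is just a naive reverse lookup, not the article's optimized decoder):

      ```c
      /* Sketch of the 1541's 4-bit-to-5-bit GCR mapping. Decoding at the
       * full media rate, as in the linked article, means reversing this
       * mapping fast enough to keep up with bits coming off the disk head. */
      #include <stdint.h>

      static const uint8_t gcr_encode[16] = {
          0x0A, 0x0B, 0x12, 0x13, 0x0E, 0x0F, 0x16, 0x17,
          0x09, 0x19, 0x1A, 0x1B, 0x0D, 0x1D, 0x1E, 0x15
      };

      /* Decode one 5-bit GCR code back to a nibble; 0xFF marks an invalid code. */
      uint8_t gcr_decode(uint8_t code)
      {
          uint8_t n;
          for (n = 0; n < 16; n++)
              if (gcr_encode[n] == code)
                  return n;
          return 0xFF;
      }
      ```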

    • Probably not a fair comparison in some ways, but this reminds me of that story of Woz making a disk drive controller with far fewer chips by being clever and thoughtful about it all. I'm probably misremembering this.

    • The 1541 was slow because the C64's serial bus was slow. Data was clocked over the bus 1 bit at a time. Various fastloaders sped up the data rate by reusing the clock line itself as a data line (2 bits at a time; see the sketch below), and later hardware adapters added parallel ports or even USB to overcome the serial-bus bottleneck.

      Basically, Commodore was gonna use an IEEE-488 bus for the drive and then decided it was too expensive late in the design, so they switched to this hackish serial bus that bottlenecked everything.
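
      A rough idea of that clock-line trick on the C64 side, as a hypothetical C sketch (polarity, synchronization, and timing all vary by loader, and real fastloaders are cycle-counted 6502 assembly rather than polled C):

      ```c
      /* Sketch of a "2-bit" fastloader receive on the C64 side. CIA2 port A
       * ($DD00) exposes the serial bus: CLK in on bit 6, DATA in on bit 7.
       * Letting both lines carry payload moves 2 bits per sample. */
      #include <stdint.h>

      #define CIA2_PRA (*(volatile uint8_t *)0xDD00)

      uint8_t recv_byte_2bit(void)
      {
          uint8_t value = 0;
          uint8_t pair;

          for (pair = 0; pair < 4; pair++) {
              /* Real loaders sync once per byte and read pairs at fixed
               * cycle offsets; a per-pair wait keeps this sketch simple. */
              while (CIA2_PRA & 0x40) { /* wait for the drive's signal */ }
              value = (uint8_t)((value << 2) | ((CIA2_PRA >> 6) & 0x03));
          }
          return value;
      }
      ```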

    • There’s more to life and computation than cycles and read/write speed. A generation of engineers began their journey doing these “useless things”

    • The 1541 was set to be a highly capable and performant machine, but an interface/design bug held it back and delivered dismal performance whenever connected to the C64. They tried to fix it but it couldn't be rescued, so speed freaks needed to wait for the 1570 series.

  • Oh, I know it's been a thing forever. Hell, my NeXT Cube with its NeXTDimension display board was such a case. The NeXTDimension board ran its own entire stripped-down OS, on an Intel i860 with a Mach kernel… and it was massively underutilized. If NeXT had done a bit more legwork and made the actual Display PostScript server run entirely on the board, it would have been insane. But the 68K still did everything.

    … I miss my NeXTs.

  • Yes, but ... Commodore did this because they had incompetent management. They shipped products (VIC-20, 1540) with a hardware defect in one of the chips (the 6522), a chip they manufactured themselves. The kicker is:

    - The C64 shipped with the 6526, a fixed version of the 6522

    - The C64 is incompatible with the 1540 anyway

    They crippled the C64 for no reason other than to sell more Commodore-manufactured chips inside a pointless box. The C128 was a similar trick: stuffing a C64 with garbage left over from failed projects and selling a computer with 2 CPUs and 2 graphics chips at twice the price. Before the slow serial devices, they were perfectly capable of making fast and cheaper-to-manufacture floppy drives for PET/CBM systems.

  • In the era of CP/M machines, the terminal likely had a similar CPU and similar RAM to the computer running the OS. So you had one CPU managing the text framebuffer and CRT driver, connected to one managing another text framebuffer and the application, connected to another one managing the floppy disk servos.

    • I guess I should have clarified more: I dislike everything running entirely separate OSes that you have no control over at all and that are complete black boxes.

      The fact they are running entire OSes themselves isn’t that big of a deal. I just hate having no control.

  • Oh God, the 1541 ran soooo hot, hotter than the C64 itself. I remember using a fan on the drive during marathon Ultima sessions. The 1571 was so much cooler and faster.

There's this great USENIX talk by Timothy Roscoe [1], who is part of the Enzian team at ETH Zürich.

It's about the dominant, unholistic approach to modern operating system design, which is reflected in the vast number of independent, proprietary, under-documented RTOSes running in tandem on a single system, and which eventually leads to uninspiring and lackluster OS research (e.g., the Linux monoculture).

I'm guessing that hardware and software industries just don't have well-aligned interests, which unfortunately leaks into OS R&D.

[1] https://youtu.be/36myc8wQhLo

  • I think making it harder to build an OS by increasing its scope is not going to help people build Linux alternatives.

    As for the components, at least their interfaces are standardized. You can remove memory sticks from manufacturer A and replace them with memory sticks from manufacturer B without a problem. The same goes for SATA SSDs, mice, and keyboards.

    Note that I'm all in favour of creating OSS firmware for devices, that's amazing. But one should not destroy the fundamental boundary between the OS and the firmware that runs the hardware.

    • Building an OS is hard. There's no way around its complexity. But closing your eyes and pretending everything is a file is a security disaster waiting to happen (actually, happening every day).

      Furthermore, OS research is not only about building Linux alternatives. There are a lot of operating systems with a much narrower focus than a full-blown multi-tenant GPOS. So building holistic systems with a narrower focus is a much more achievable goal.

      > As for the components, at least their interfaces are standardized

      That's not true once you step into SoC land. Components are running walled-garden firmware and binary blobs that are undocumented. There's just no incentive to provide a developer platform if no one gives a shit about holistic OSes in the first place.

Every cell in your body is running a full-blown OS, fully capable of doing things that each individual cell has no need for. It sounds like this is a perfectly natural way to go about things.

  • Organic units should not be admired for their design.

    DNA is the worst spaghetti code imaginable.

    The design is such a hack that it's easier to let the unit die and just create new ones every few years.

    • "Let the unit die and just create new ones every few years" is a brilliant solution to many issues in complex systems. Practically all software created by humans behaves the same way - want a new version of your browser or a new major version of your OS kernel or whatever else - you have to restart them.

    • "The creatures outside looked from DNA to k8s YAML, and from k8s YAML to DNA, and from DNA to k8s YAML again; but already it was impossible to say which was which."

    • Death isn’t a solution to maintenance issues, there are some organisms including animals that live many hundreds of years and possibly indefinitely. The reason seems to be to increase the rate of iterations, to keep up the pace of adaptation and evolution.

    • It's a pretty amazing hack, though.

      The human body can scale from 1 cell to several trillion without going down for maintenance even once, all while differentiating into different functions.

      It can take a high level of damage and heal without needing a shutdown as well; most software crashes completely at the first exception.

      Cells give you that highly scalable and fault-tolerant system that we all want.

    • Poor comparison - DNA is compiled assembly language code. It is meant to be spaghetti to save space and reuse proteins for multiple functions. In that regard it’s the most efficient compiler in the universe.

    • And it’s still more adaptable/robust/intelligent than almost any system we’ve built so far.

It's interesting that microkernels didn't "win" at the OS layer, but they kind of seem to have "won" one layer down.

  • I think IBM's IO channels would like a word... it's been like this for most of computing.

The real-time nature is not what makes it closed though. It's simply that it's been designed to be closed.

For example, Intel's ME could be a really useful feature if we could do what we want with it. Instead they lock it down so it's just built-in spyware.

  • Isn’t the primary purpose of the ME to run DRM and backdoor the system? How would it be useful at all as open source? People would just turn it off entirely.

    • I could imagine a super-low-power coprocessor that's always on being a really useful tool. It could check my email and handle other low-power tasks.

      And remote management isn't bad if it's entirely under my control. It's the closed nature that makes me distrust it.

I don't know. This sounds very computer-science-y. We build smaller tools to help build big things. Now the big things are so good and versatile that we can replace our smaller tools with the big things too. With the more powerful tools, we can build even bigger things. It's just compiler bootstrapping happening in the hardware world.

  • The problem is that there's so much unexplored territory in operating system design. "Everything is a file" and the other *nix assumptions are too often just assumed to be normal. So much more is possible.

    • Possible, but apparently rarely worth the extra effort or complexity to think about.

      The Unix ‘everything is a file’ has done well because it works pretty well.

      It also isn’t generally a security issue, because it allows application of the natural and well-developed things we use for files (ACLs, permissions, etc.) without having to come up with some new bespoke idea, with all its associated gaps, unimplemented features, etc.

      Hell, most people don’t even use POSIX ACLs, because they don’t need them.
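
      To make that concrete, a tiny illustrative C sketch: a device node goes through the same open/read syscalls, and the same permission checks, as a regular file (using /dev/urandom as the example):

      ```c
      /* "Everything is a file" in practice: a kernel device is opened and
       * read with the same syscalls, and the same permission checks, as
       * any regular file on disk. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          unsigned char buf[16];
          int fd = open("/dev/urandom", O_RDONLY); /* same open() as a file */

          if (fd < 0) {
              perror("open");  /* EACCES here is ordinary file permissions */
              return 1;
          }
          if (read(fd, buf, sizeof buf) == (ssize_t)sizeof buf) {
              for (int i = 0; i < (int)sizeof buf; i++)
                  printf("%02x", buf[i]);
              putchar('\n');
          }
          close(fd);
          return 0;
      }
      ```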

Same. It's not about the principle, but that these OSes generally increase latency etc. There's so much you can do with interrupts, DMA, and targeted code when performance is a priority.

I sometimes wonder how fast things could go if we ditched the firmware and just baked a kernel/OS right into the silicon. Not like all the subsystems that run their own OSes/kernels, but really cutting every layer and having nothing in between.

  • You'd find yourself needing to add more CPUs to account for all the low-level handling the various coprocessors currently do for you, eating into your compute budget, especially under a high interrupt rate, since that work would no longer be abstracted and batched in the now-missing coprocessors.

  • It'd be slower. These coprocessor OSes are there to improve performance in the first place.

    (Especially because wall clock time is not the only kind of performance that matters.)

Technically it is not a real-time OS. There are very few OSes that carry that moniker (VxWorks, QNX, etc.).

  • "Real-time" isn't a trademark, you can assign it to other things if they meet the typical guarantees of "real-time".