
Comment by cosmic_cheese

9 days ago

For me the interesting alternate reality is one where CPUs got stuck in the 200–400 MHz range for speed but somehow continued to become more efficient.

It’s kind of the ideal combination in some ways. It’s fast enough to competently run a nice desktop GUI, but not so fast that you can get overly fancy with it. Eventually you’d end up with OSes that look like highly refined versions of System 7.6/Mac OS 8 or Windows 2000, which sounds lovely.

I loved System 7 for its simplicity and all the potential it offered individual developers.

Hypercard was absolutely dope as an entry-level programming environment.

  • The Classic Mac OS model in general is, I think, the best there has been or ever will be in terms of sheer practical user power/control/customization, thanks to its extension- and control-panel-based architecture. Sure, it was a security nightmare, but there was practically nothing that couldn’t be achieved by installing some combination of third-party extensions.

    Even modern desktop Linux pales in comparison: although it’s technically possible to change anything imaginable about it, to do a lot of what extensions did you’re looking at, at minimum, writing your own DE/compositor/etc., and at worst needing to tweak a whole stack of layers or wade through kernel code. Not really accessible to general users.

    Because extensions were capable of changing anything imaginable and often did so with tiny-niche tweaks and all targeted the same system, any moderately technically capable person could stack extensions (or conversely, disable system-provided ones which implemented a lot of stock functionality) and have a hyper-personalized system without ever writing a line of code or opening a terminal. It was beautiful, even if it was unstable.

    • I’m not too nostalgic for an OS that only had cooperative scheduling. I don’t miss the days of Conflict Catcher, or of having to order my extensions correctly. Illegal instruction? Program accessed a dangling pointer? A bomb message held up your whole computer and you had to restart (unless you had a non-stock debugger attached and could run ExitToShell, but no promises there).


    • > The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization

      A point for discussion is whether image-based systems are the same kind of thing as OSes where system and applications are separate things, but if we include them, Smalltalk-80 is better in that regard. It doesn’t require you to reboot to install a new version of your patch (if you’re very careful, that’s sometimes possible in classic Mac OS, too, but it definitely is harder) and is/has an IDE that fully supports it.

      Lisp systems and Self also have better support for it, I think.


    • > Not really general user accessible.

      Writing a classic Mac OS extension wasn’t exactly easy. Debugging one could be a nightmare.

      I’m not sure how GTK themes are done now, but they used to be very easy to make.


I sometimes drop my CPU down to the 400–800 MHz range. 400 is rough; 800, not so bad. It runs fine with something like i3 or sway.

If we really got stuck in the hundreds of MHz range, I guess we’d see many-core designs coming to consumers earlier. Could have been an interesting world.

Although, I think it would mostly have been impossible. Or maybe we’re in that universe already. If you’re getting efficiency but not speed, you can always add parallelism, and one form of parallelism is pipelining. We’re at something like 20 pipeline stages nowadays, right? So in the ideal case, if we weren’t able to parallelize in that dimension, we’d be at something like 6 GHz / 20 = 300 MHz. That’s pretty hand-wavy, but maybe it’s a fun framing.
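A back-of-the-envelope sketch of that framing (the 6 GHz clock and 20-stage depth are round assumptions, not measurements):

```python
# If pipelining is counted as parallelism, a deeply pipelined core
# "spends" its clock across stages; dividing out the depth gives a
# rough equivalent unpipelined clock.
clock_ghz = 6.0        # assumed peak clock of a modern core
pipeline_stages = 20   # rough depth of a modern pipeline

equivalent_mhz = clock_ghz * 1000 / pipeline_stages
print(f"~{equivalent_mhz:.0f} MHz equivalent")  # ~300 MHz equivalent
```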

The alternate reality I wish we could move to, across the universe, is the one where SGI was the first to build a titanium laptop and became the world’s #1 Unix laptop vendor.

  • I love the IRIX look, but they’d need to update it past the 1990s. It’d look very dated to current audiences.

    • NeXTSTEP looked pretty dated too, but it went through a nice evolution that brought it up to modern design standards. If SGI had made that laptop and increased their market share, I’m pretty sure IRIX would’ve gotten a face-lift.

      Anyway, it’s all about that alternate universe where the success of the SGI tiBook has everyone running IRIX in their pockets.


Given enough power and space efficiency, you would start putting multiple CPUs together for specialized tasks. Distributed computing could have looked different.

  • This is more or less what we have now. Even a very pedestrian laptop has 8 cores. If 10 years ago you had wanted to develop software for today’s laptop, you’d have gotten a 32-gigabyte 8-core machine with a high-end GPU, and a very fast RAID system to get close to an NVMe drive.

    Computers have been “fast enough” for a very long time now. I recently retired a Mac not because it was too slow but because the OS was no longer getting security patches. While CPUs haven’t been getting twice as fast for single-threaded code every couple of years, cores have become more numerous, and extracting performance requires writing code that distributes work well across increasingly large core pools.

  • This was the Amiga. Custom coprocessors for sound, video, etc.

    • Commodore 64 and Ataris had intelligent peripherals. Commodore’s drive knew about the filesystem and could stream the contents of a file to the computer without the computer ever becoming aware of where the files were on the disk. They could also copy data from one disk to another without the computer being involved.

      Mainframes are also like that: while a PDP-11 would be interrupted every time a user at a terminal pressed a key, IBM systems offloaded that to the terminals, which kept one or more screens in memory and sent the data to another computer, a terminal controller, which would then, and only then, disturb the all-important mainframe with the mundane needs of its users.


  • This is effectively what the Mac does now: background tasks run on low-power cores, keeping the fast ones free for interactive work. More specialised ARM processors have 3 or more tiers, and often have cores with different ISAs (32- and 64-bit ones). Current PC architectures are already very distributed: your GPU, NIC/DPU, and NVMe SSD all run their own OSes internally, and most of the time don’t expose any programmability to the main OS. You could, for instance, offload filesystem logic or compression to the NVMe controller, freeing the main CPU from having to run it. The same could be done for a NIC: it could manage remote filesystem mounts and only expose a high-level file interface to the OS.

    The downside is that we’d have to think about binary compatibility between platforms from different vendors. Anyway, it’d be really interesting to see what we could do.

The Game Boy Advance could run 2D games (and some 3D demos) for 16 hours on 2 AA batteries. I wonder if we could get something more efficient with modern tech? It seems research made things faster but more power hungry, and we compensate with better batteries instead. I guess we could, and it’s a question of design goals; I also do love a backlit screen.
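For scale, 16 hours on 2 AA cells works out to well under a watt for the whole console (the cell capacity below is an assumed ballpark figure, not a measurement):

```python
# Two AA cells in series: ~3 V nominal, ~2.5 Ah capacity (assumed).
voltage_v = 3.0
capacity_ah = 2.5
runtime_h = 16

energy_wh = voltage_v * capacity_ah   # ~7.5 Wh total
avg_power_w = energy_wh / runtime_h   # average draw over the run
print(f"~{avg_power_w * 1000:.0f} mW for the whole console")
```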

  • > It seems research made things faster but more power hungry

    No, modern CPUs are far more power efficient for the same compute.

    The primary power draw in a simple handheld console like that would be the screen and sound.

    Putting an equivalent MCU on a modern process into that console would make the CPU power consumption so low as to be negligible.

    • Yes; yet... I thought efficiency per unit of compute has more to do with process-node shrinks than anything else. That, and power use being divided across so many more instructions per second.

My alternate reality "one of these days" project is to have a RISC-V RV32E core on a small FPGA (or even emulated by a different SoC) that sits on a 40- or 64-pin DIP carrier board, ready to be plugged into a breadboard. You could create a Ben Eater-style small computer around this, with RAM, a UART, maybe something like the VERA board from the Commander X16...

It would probably need a decent memory controller, since it wouldn't be able to dedicate 32 pins to a data bus; loads and stores would need to be done either 8 or 16 bits at a time, depending on how many pins you want to use for that.
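As a sketch of what that narrow bus means for the core, here is a 32-bit little-endian load assembled from four 8-bit transfers (the `read_byte` callback stands in for a hypothetical bus interface):

```python
def load32_via_8bit_bus(read_byte, addr):
    """Assemble a little-endian 32-bit word from four 8-bit bus reads."""
    word = 0
    for i in range(4):
        word |= read_byte(addr + i) << (8 * i)
    return word

# Toy "memory" holding 0x12345678 at address 0x100, little-endian.
mem = {0x100: 0x78, 0x101: 0x56, 0x102: 0x34, 0x103: 0x12}
assert load32_via_8bit_bus(mem.__getitem__, 0x100) == 0x12345678
```

Each 32-bit access costs four bus cycles instead of one; that latency is the price of freeing up the pins.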

  • Have you thought about building a RISC-V “fantasy computer” core for the MiSTer FPGA platform? https://github.com/MiSTer-devel/Wiki_MiSTer/wiki

    From a software-complexity standpoint, something like 64 MiB of RAM (possibly even 32 MiB for a single-tasking system) seems sufficient.

    Projects such as PC/GEOS show that a full GUI OS written largely in assembly can live comfortably within just a few MiB: https://github.com/bluewaysw/pcgeos

    At this point, re-targeting the stack to RISC-V is mostly an engineering effort rather than a research problem - small AI coding assistants could likely handle much of the porting work over a few months.

  • The really cool thing about RISC-V is that you can design your own core and get full access to a massive software ecosystem.

    All you need is RV32I.
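To give a feel for how small the base ISA is, here is a toy sketch that decodes and executes a single RV32I ADDI instruction (the I-type encoding is standard; the one-instruction "core" is, of course, just an illustration):

```python
def sign_extend(value, bits):
    """Interpret the low `bits` bits of value as a signed integer."""
    mask = 1 << (bits - 1)
    return ((value & ((1 << bits) - 1)) ^ mask) - mask

def step_addi(regs, instr):
    """Decode and execute one ADDI (I-type: opcode 0x13, funct3 0)."""
    opcode = instr & 0x7F
    rd     = (instr >> 7) & 0x1F
    funct3 = (instr >> 12) & 0x7
    rs1    = (instr >> 15) & 0x1F
    imm    = sign_extend(instr >> 20, 12)
    assert opcode == 0x13 and funct3 == 0, "only ADDI in this toy"
    if rd != 0:                      # x0 is hardwired to zero
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF

regs = [0] * 32
step_addi(regs, 0x02A00093)  # addi x1, x0, 42
assert regs[1] == 42
```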

There's something to this. The 200-400MHz era was roughly where hardware capability and software ambition were in balance — the OS did what you asked, no more.

What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating — you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.

The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description — refined, responsive, deliberate. The constraint forces good design.

  • > What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free.

    I dunno if it was cheap RAM or just developer convenience. In one of my recent comments on HN (https://news.ycombinator.com/item?id=46986999) I pointed out the performance difference on my 2001 desktop between a `ls` program written in Java at the time and the one that came with the distro.

    Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1]. There was simply no way that companies would have continued spending 6–8 times more on hardware for a specific workload.

    C++ would have been the enterprise language solution (a new sort of hell!), and languages like Go (native code with a GC) would have been created sooner.

    In 1998–2005, computer speeds were increasing so fast that there was no incentive to develop new languages. All you had to do was wait a few months for your program to run faster!

    What we did was trade efficiency for developer velocity, and it was a good trade at the time. Since around 2010, performance increases have been slowing, and when faced with stagnant hardware performance, new languages were created to address that (Rust, Zig, Go, Nim, etc.).

    -------------------------------

    [1] It took two decades of constant work for those high-dev-velocity languages to reach some sort of acceptable performance. Some of them are still orders of magnitude slower.

    • > Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1].

      I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.

      Had processor speed and memory advanced more slowly, I don't think you'd see these languages go away; I think they'd just end up being used for different things or in different ways.

      JavaOS, in particular, probably would have had more success. An entire OS written in and for a language with a garbage collector, to make sure memory wasn't wasted, would have been much more appealing.


    • As you say, the trade-off is developer productivity vs resources.

      If resources are limited, that changes the calculus. But it can still make sense to spend a lot on hardware instead of development.

  • Lots of good practices! I remember how aggressively iPhone OS would kill your application when you got close to running out of physical memory, or how you had to quickly serialize state when the user switched apps (no background execution, after all!). And, for better or for worse, it was native code, because you couldn’t (and still can’t) get a “good enough” JITing language.