I bought an Apple II and then a SoftCard. I was trying to learn C, and there was a compiler on CP/M (Borland) but not on the Apple II.
It is always hard to go back and understand what it was like before an event. Like the Velvet Revolution. But at the time I was working on an IBM 360, mostly doing Fortran for scientists running anemometer simulations. The center for this activity was the person in charge of the 360 who could dole out time on the computer.
The power dynamic was something I did not really notice, but in retrospect this was frustrating for the mathematicians/scientists trying to run simulations. They had to queue up and wait.
Then one day a mathematician brought in an Apple II running VisiCalc. His own personal computer. He ran his simulations on that.
It was like our small world trembled as the tectonic plates of technology shifted. The power shifted just in that one instant. It was cool - how we saw the world changed in one instant.
Steve Wozniak was incredibly foresighted when designing the Apple II, to make sure that expansion cards could disable the default ROMs and even disable the CPU, making this kind of thing possible. The article mentions a chunk of memory "used by peripheral devices"; every expansion card got its own slice of the address space, so you could plug a card in any slot and it would Just Work (maybe you'd have to tell software what slot the card was in). I was very disappointed when I "upgraded" to a 386 and suddenly cards had to be manually configured to non-conflicting IRQs and I/O addresses.
And if something didn't work, he included a complete debugger, the Apple II Machine Language Monitor, in ROM, so you could always just disassemble and poke at things, pipe disassembly to the printer, read memory, change code, add your own macros to CTRL+Y, and rerun stuff. All that without extra software or a massive pile of printed assembly.
from BASIC:
CALL -151 (short for CALL 65385, but BASIC can't handle unsigned INT so that wouldn't work)
F666G
and the machine is your playground.
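Rough sketch of the sort of session that makes possible, from memory and assuming the standard ROM entry points (COUT, the character-output routine, lives at $FDED):

    *300:A9 C1 20 ED FD 60     (store LDA #$C1 / JSR $FDED / RTS at $0300)
    *300L                      (disassemble it to double-check)
    *300G                      (run it - prints "A" and drops back to the monitor prompt)

No assembler, no external tools, just the machine itself.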
I don't think this is entirely due to Wozniak. Early "home" computer systems were based on connecting cards to a bus (e.g. the S-100 bus), with one card carrying the CPU, another RAM, a third the disk controller, a video card, etc. The cards were then memory mapped; presumably you controlled the mapping by setting jumpers. (I guess you're saying that the Apple II managed this automatically?) Of course the full story might be a bit more complicated: the 6502 and 6800 used memory-mapped I/O, whereas the 8080 (and Z80?) had separate I/O instructions and pins coming out of the CPU.
Memory mapping happened automatically. Each card was mapped based on the slot it was in - $C100 - $C7FF I believe, with each slot assigned 256 bytes.
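If memory serves, the layout for a card in slot n (1-7) was roughly:

    $C080 + $10*n     16 soft-switch / I/O locations for slot n
    $Cn00 - $CnFF     256 bytes of slot ROM
    $C800 - $CFFF     2 KB shared expansion ROM, banked to whichever card was last addressed

Which is why a Disk II controller in slot 6 shows its boot ROM at $C600, and typing C600G in the monitor boots the disk.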
> to make sure that expansion cards could disable the default ROMs and even disable the CPU, making this kind of thing possible.
Today we would call this bus mastering, yes?
Clearly Steve Wozniak was a very unique [technical and geeky] guy at that time. Thinking about interoperability back then was prophetic.
I wouldn't go so far as to say it was "prophetic". Contemporary DEC PDP-8 (OMNIBUS) and PDP-11 (UNIBUS / QBUS) systems had a similar approach to "interoperability", where cards for peripherals were also mapped into the machine's address space. It was great that Woz saw the utility of this and brought it into homebrew/microcomputer design.
I think it was more driven by his own desire to not limit what future hardware hacking he wanted to do with this computer he just designed.
Cool post from Raymond as usual!
I’d like to add that the hardware for the SoftCard was designed by Tim Paterson at SCP, at about the same time he was writing the future MS-DOS.
Years later, the Apple DOS Compatibility Card (code-named Houdini) could do the same thing. It had a 486DX2-66 and a Sound Blaster on board. By default it shared the host Mac’s memory, and you could run both simultaneously. But it wasn’t a great experience on either side: they both ran slower.
Alternatively, you could put up to a 32MB RAM SIMM directly on the card.
Now that I think about it, my first Mac did the same thing with the Apple //e card.
I find hosted/hybrid machines particularly fascinating, so I have a 6100/66 DOS with a Houdini II NuBus card and an education-market LC with the Gemini-based Apple IIe PDS card that I've collected over the years.
They are ...weird... machines.
Both have a (different) extremely bespoke Y cable that is almost required, such that if you find a card separated from the cable, you probably shouldn't pay much for it.
The IIe card has a little lag in the video circuitry compared to the real thing (at least in a first gen LC host, apparently that problem goes away if you stick it in a faster machine with a 24-bit PDS slot).
Coaxing the Houdini II to boot things that are not fundamentally MSDOS is always a good way to throw away a couple hours, but it does a great job of convincing anything up to Win95 that it's a PC. Performance is absurdly better with dedicated RAM.
There are a couple of other things in the family: the MacCharlie and the AST Mac86/Mac286 products for bolting PC hardware onto various Macs, and the later OrangePC cards (they ended up with the IP from both Apple's and AST's offerings). The apex of "weird hosted computers you can stick in a Mac" is probably the MacIvory products (a LISP machine on a NuBus card), but those are "costly and rare," and are infamously balky even if you do get the hardware (...and I just don't enjoy Lisp).
Sun had a SunPC/SunPCi line in the same vein that would bolt a PC-on-a-card into various SPARC hosts.
Commodore had that first-party Sidecar product with a PC-XT in a box for Amigas, and there was ShapeShifter that would let you fake a Mac semi-native on a 68k Amiga. Likewise DayDream (recently updated into DarkMatter) to run a Mac environment on a 68k NeXT host, both of which "needed" Mac ROMs attached on a dongle for license reasons.
MAE is emulation, but it was an Apple-blessed way to run MacOS hosted on contemporary Unix workstations of the early 90s, which is sort of the opposite. I've managed to prod it onto a (real) PA-RISC/HPUX host, and (emulated, because my SS20 has been super balky as long as I've had it) SPARC/Solaris host just for sport - I'm pretty sure it was built out of decapitated A/UX parts and an emulator when A/UX4 didn't happen.
I'd like to round out my set with at least a IIe with a Premium Softcard IIe at some point, but I'm not willing to pay eBay prices for any of that stuff.
I remember my dad using the Z80 Softcard to run WordStar, which was astonishingly powerful considering how long ago it was king of word processors. I’d be surprised if some of the control keys hadn’t influenced our editors, although as a Vim user I can’t immediately think of any.
Turbo Pascal and other Borland products used to use keys based on WordStar. These days JOE (Joe's Own Editor) still uses a similar keyset.
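For anyone who never used it, the core WordStar "diamond" went roughly like this (from memory, so treat it as a sketch):

    Ctrl-E / Ctrl-X       cursor up / down
    Ctrl-S / Ctrl-D       cursor left / right
    Ctrl-A / Ctrl-F       word left / word right
    Ctrl-Y                delete line
    Ctrl-K B / Ctrl-K K   mark block begin / end
    Ctrl-K S              save and continue

The Turbo Pascal editor kept most of this, and JOE still ships a jstar flavour with essentially the same bindings, IIRC.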
> These days JOE (Joe's Own Editor) still uses a similar keyset.
joe is definitely among the easiest CLI/TUI editors there are.
WordStar was basically all we needed, and it still is.
Imagine if you had something that small and powerful today.
https://archive.org/details/wordstar_202310
> WordStar was basically all we needed, and it still is.
>
> Imagine if you had something that small and powerful today.
I completely agree with the first part. But why do you think we don't have that today, if we choose to do so?
It was so much easier to do hardware shenanigans with these old-school chips where the pins on these breadboardable chips actually corresponded to memory address bits.
Ooh, the Z80 refresh line. You can have a program that writes to the R register in a loop and prevents the refreshes from happening.
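Something like this, if I remember the Z80 right (LD R,A is a real instruction, so the sketch just keeps stomping on the refresh counter):

    LOOP:  XOR  A        ; A = 0
           LD   R,A      ; reset the refresh register
           JR   LOOP     ; R never advances far, so higher DRAM rows stop being refreshed

Run that long enough on a DRAM-based machine and memory outside the first few rows starts to decay.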
This is great, I’m building new machines on the 6502 and can use this. Thanks.
I wonder if anyone ever used the Z80 Softcard or one of its many clones to run something other than CP/M?
I got MP/M working on the softcard back in the day. Never really had an application for it though.
One of the biggest disappointments of the 8-bit era was the Commodore 128 not being able to use both the 8502 and Z80 CPUs in some kind of coprocessor setup.
The Commodore 128D has two 6502 CPUs. One is in the floppy drive, and you can run software on it while the main 6502 runs something else.
I wrote code to do this between a C64 and a 1541 disk drive when I was in high school. It got me to the international science fair and (probably) earned me a full tuition scholarship for undergrad.
That derives logically from the way Commodore implemented disks. If you bought a 1540 or 1541 (or any other Commodore drive) for a C-64 or VIC-20, it had an onboard 6502 to run the disk drive. The interaction between the computer and the disk drive was somewhat similar in concept to fetching a file from a network server.
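The usual way to get your own code onto the drive's 6502 was the DOS memory commands over the command channel - "M-W" (memory write) and "M-E" (memory execute). Very roughly, from memory (untested), something like:

    10 REM TALK TO THE DRIVE'S CPU OVER THE COMMAND CHANNEL
    20 OPEN 15,8,15
    30 PRINT#15,"M-W"CHR$(0)CHR$(5)CHR$(1)CHR$(96):REM WRITE 1 BYTE ($60=RTS) AT $0500 IN DRIVE RAM
    40 PRINT#15,"M-E"CHR$(0)CHR$(5):REM EXECUTE AT $0500 ON THE DRIVE'S 6502
    50 CLOSE 15

That's essentially how fastloaders and other custom drive-side code got loaded.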
This could be useful to save on costs in computer labs... my grade school used multiplexer boxes to share a single 1541 across four C-64's.
128D had 3 cpus :)
"According to Wikipedia..." aargh Wikipedia is not the source!
Maybe in 2005, but in 2025, Wikipedia is more reliably accurate than many more-official-sounding sources.
I mean, Wikipedia is referenced and well sourced, so it is a perfectly valid source in this day and age. I read papers weekly, and nowadays, with the pressure to publish often, they contain more lies and dishonesty than Wikipedia does.
Would be cool if Microsoft would focus on engineering instead of blog posts
The Old New Thing is very much engineering. Any contemporary engineers who don't think they have anything to learn from the experience of the past as recounted in the blog are doomed to repeat the same missteps.
And much as one would hope that Raymond Chen's blogging is holding up any important Microsoft initiatives, I very much doubt that it's much of a distraction for a megacorporation.
Also, you should still be able to find Raymond Chen's posts about his blogging process, but part of why he has had nearly a blog post every work day for so many years is that he built up a huge queue, adds to it only during relatively free time, and has automated the posting of that queue. That also seems to be why so many posts are multi-part deep dives. (This post, too, ends with a tease for the next one.) Often a series probably grew out of a single investigative journey - a Windows support question, a user story writeup, documentation that needed to be written, or some other TIL rabbit hole - but breaking it into multiple parts keeps each part easy to read on its own and keeps the queue full for the weeks when things are busier.
RE "....Any contemporary engineers who don't think they have anything to learn from the experience of the past....." 100% correct
He's been blogging continuously for close to twenty years - he was one of the original wave of Microsoft bloggers (along with Larry Osterman, Michael Kaplan, and several others I can't remember).
It is very much an engineer's engineering blog, and written by someone deeply in the trenches.
Personally, I prefer cool blog posts over "add another Copilot button that does nothing to something that did not require it anyway" or "paper over a perfectly fine API with a newer version that has 60% of the functionality and 120% of the bugs" (which is what Microsoft engineering mostly seems to boil down to these days), but you be you...
They don't have an engineering problem, they have a management problem which ruins and obstructs anything good their engineers might try to make.
Raymond Chen’s blog posts are one of the best things coming out of Microsoft.
As a Unix person for decades, I find it great to see his incredibly experienced and insightful view on software development in general, and on OS development at Microsoft specifically, and to read about his experience with all these nice processor architectures no longer supported by NT.
They were blogging about engineering before blogging was mainstream. You had to subscribe to their MSDN CDs to appreciate how much info they put out for their products; they used to be developer-centric.