Around 2003 there was a demo CD. This brings up a nice system with a GUI and browser.[1][2]
QNX started closed source with a free version, went open source, went closed source, went open source after an acquisition, and then went closed source when RIM (Blackberry) acquired them. Then RIM dropped the GUI to focus on whatever it is Blackberry still does.
As I once told one of their sales execs, "quit worrying about people pirating your system and worry about people ignoring it". During the first free version period, people were porting open source software such as GCC, Eclipse, and browsers to QNX. With all the licensing changes, the open source community got fed up with QNX and stopped making versions for it.
We used QNX for our DARPA Grand Challenge vehicle. All our desktop machines ran QNX, and we could run the real-time program on them as well as the vehicle. The real time features were so good that we could have the entire real-time vehicle system running, at hard real time priority, and run a browser or compile without missing a time check.
[1] http://toastytech.com/guis/qnx621.html
[2] https://archive.org/details/qnx_momentics_6.2.1
The final fate of self-hosted QNX was super unfortunate, and reflected poor decision making on the part of RIM/Blackberry. QNX completely dropped support for self-hosted QNX Neutrino development after 6.5 [0], and stopped distributing installation-media ISOs with 6.6/7.0. Instead you needed to develop everything on macOS/Windows/Linux hosts and build custom images using their published BSPs (Board Support Packages).
So no Neutrino-hosted compiler toolchain, no desktop. Oh yeah, they also completely killed off their full GUI desktop, the Photon microGUI, in QNX 6.6. There was even a working port of Mozilla Firefox to it at some point. You could use all of this freely with a hobbyist/non-commercial license in the early 2000s.
[0] http://www.qnx.com/developers/docs/7.0.0/index.html#com.qnx....
As a casual outside observer, it seems to me that dropping self-hosted support was a very sensible move. They probably didn't want to spend resources on PC hardware support, particularly for laptops. And of course, a developer's PC doesn't just run development tools; it also has to handle things like connecting to VPNs and being manageable by company IT departments. Some people need accessibility accommodations (e.g. screen readers, magnifiers, or alternative input methods), and there's no reason to assume that this group doesn't (or couldn't) include some developers of niche embedded systems. The list goes on and on. Even desktop Linux doesn't do a great job on all these things, never mind a niche OS like QNX. So doing cross-development from Windows or macOS makes a lot of sense.
Edit to add: Just had a scary thought. What if Red Hat and whoever else is actually spending money on desktop Linux development applied the same logic to desktop Linux itself that I retroactively applied to self-hosted QNX? After all, non-developers don't use Linux, right? (I'm speculating that they'd make that assumption, not saying it's actually true.) And developers can work with Linux by connecting to a remote machine or running a VM on a "normal" (i.e. Windows or Mac) computer. Is there enough economic incentive to keep maintaining and improving desktop Linux that this won't happen? The death of desktop Linux wouldn't actually hurt me, as I mainly use Windows, but I'd still be sad.
I was using it at that time as my main desktop OS. I ported and wrote several tools for it. The GUI itself was simple, nice to use and to program for.
Driver support wasn't extensive, but the available drivers worked fine. With qnxstart.com and all the OSS tooling available at the time, I wasn't missing anything.
I was in love in a way that only BeOS had given me before.
This was the last commercial/closed-source OS I ever used, in no small part due to the license change.
1- What's the FOSS alternative that is as good as QNX nowadays? FreeRTOS, NuttX, Zephyr etc are used for MCUs not general purpose computation AFAIK
2- What tech stack (language) did you use for those challenges?
3- What's your take on Lisp (for these applications)?
QNX was a true microkernel architecture that worked, and worked well. The basic building block was called, IIRC, Send/Receive/Reply: every "system" call looked like a regular function call, but would "Send" a message to a different process and (usually) suspend the caller; the other process would "Receive", do whatever was requested, and "Reply", at which point control went back (with the response) to the calling process. IIRC it was also possible to do async calls, but in that case the other process would call ("Send") the response back, rather than "Reply". I might be confusing this with another system though.
Device drivers weren't privileged - they were just another process you called into, and could be restarted in the case of a fault (rather than a kernel panic or blue screen).
A system that doesn't provide this is not an alternative to QNX; It's just another operating system (which are all, in some ways, alternative to each other and thus QNX, but ...)
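For reference, here's a minimal, from-memory and untested sketch of that round trip using the QNX Neutrino spelling of the calls (MsgSend/MsgReceive/MsgReply; QNX 4 literally named them Send/Receive/Reply). The client blocks in MsgSend until the server replies, which is the synchronous behavior described above:

    /* Untested sketch: a server thread and a client in one process. */
    #include <sys/neutrino.h>
    #include <pthread.h>
    #include <stdio.h>

    static int chid;   /* the server's channel */

    static void *server(void *arg) {
        char msg[64];
        (void)arg;
        for (;;) {
            /* Block until some client Sends; rcvid identifies that client. */
            int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
            if (rcvid == -1) break;
            printf("server got: %s\n", msg);
            MsgReply(rcvid, 0, "pong", 5);   /* unblocks the sender */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        char reply[64];

        chid = ChannelCreate(0);
        pthread_create(&tid, NULL, server, NULL);

        /* Attach to <node 0 = local, pid 0 = self, chid>. */
        int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

        /* MsgSend blocks here until the server replies -- the synchronous
           round trip described in the parent comment. */
        MsgSend(coid, "ping", 5, reply, sizeof reply);
        printf("client got: %s\n", reply);
        return 0;
    }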
1. Nothing. Someone was writing a QNX-type kernel in Rust. What happened to that? There's L4, but it's too low-level. It's more of a hypervisor. People usually run another OS, usually a stripped-down Linux, on top of L4. QNX offers a POSIX API, so you can run applications directly.
2. C++. Here's the source code. [1]
3. No.
[1] https://github.com/John-Nagle/Overbot/
>What's the FOSS alternative that is as good as QNX nowadays? FreeRTOS, NuttX, Zephyr etc are used for MCUs not general purpose computation AFAIK
SeL4 with CAmkES, or Genode.
In automotive infotainment systems, where QNX was used a lot in the past, it has mostly been replaced by combinations of Linux or Android on the non-realtime-critical systems, plus smaller microcontrollers running real-time OSes (OSEK and AUTOSAR derivatives). The latter are usually not open source.
Dunno if it’s an alternative, but Linux has PREEMPT_RT, and you can build a kernel that’s fully preemptible.
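For concreteness, here's a minimal sketch (standard Linux/POSIX APIs, nothing PREEMPT_RT-specific in the code itself) of the setup a latency-sensitive program typically does on such a kernel: lock its memory so page faults can't add jitter, then ask for a SCHED_FIFO priority.

    /* Typical real-time setup on a PREEMPT_RT kernel (sketch only; needs
       root or an appropriate rtprio rlimit). */
    #include <sched.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param sp = { .sched_priority = 80 };   /* range 1..99 */

        /* Pin all current and future pages in RAM so the RT path never faults. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* Request a fixed real-time priority for this thread. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... periodic real-time work loop would go here ... */
        return 0;
    }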
Minix 3 is closest to "as good as" minus the realtime support.
this demo cd is my benchmark of what a computer is supposed to feel like. if UI interactions are in any way slower than what this provides (on, say, 2008-era hardware), it's basically poop
It is a shame yes. There is no current desktop RTOS at all afaik.
I know an RTOS does not guarantee smooth desktop performance but I just would love to try it out to compare.
FWIW, Windows is not a true RTOS, but it does get pretty darn close.
The true hallmarks of an RTOS kernel are hard real-time scheduling, usually with round-robin priorities, and support for handling priority inversion: when a low-priority process blocks a high-priority one, it temporarily inherits the high priority so the deadline can still be met.
In Windows, threads have a priority, with the highest-priority runnable thread occupying the CPU - however, there's a series of 'hacks' that allow it to emulate real-time behavior.
Threads can get a priority boost in some cases, such as the aforementioned priority-inversion case, when the user interacts with the program associated with the thread, when the thread hasn't run for a long time, etc.
Additionally, there's a set of 'real-time' priorities that can preempt all non-realtime priorities. You need admin or kernel access to set this priority level, as these threads can lock up your system because they can't be preempted.
While I wouldn't trust Windows to control an ICBM, it's good enough at giving resources to user processes that your UI feels responsive.
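To make that concrete, here's a minimal Win32 sketch of opting into the 'real-time' priority band described above (an illustration, not a recommendation: a busy loop at this level really can starve the rest of the system, and it generally requires elevation):

    /* Win32 sketch: move the current process/thread into the realtime band. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Needs administrator rights; otherwise Windows quietly falls back
           to HIGH_PRIORITY_CLASS. */
        if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
            printf("SetPriorityClass failed: %lu\n", GetLastError());

        /* Highest priority within the class (level 31), above almost
           everything else on the machine. */
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        /* ... time-critical work here; keep it short and well-behaved ... */
        return 0;
    }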
> There is no current desktop RTOS at all afaik.
Plan 9 has deadline scheduling out-of-the-box for real-time. It runs on x86-64, 386, Arm v7 and AArch64 (And more): http://doc.cat-v.org/plan_9/real_time/ (mostly obsolete but describes the motivation and implementation)
See the proc(3) man page for deadline scheduling (the real-time bits I described are towards the bottom): http://man.9front.org/3/proc (I always recommend the actively maintained 9front fork.)
The best part is you don't need special patches or libraries. You simply configure the process/group by writing messages to the process's ctl file, from the command line, a script, or from within your program.
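Roughly what that looks like from a program, going by the linked paper and man page (the exact control-message names here are from memory, so treat them as an assumption rather than gospel):

    /* Sketch: ask the Plan 9 EDF scheduler to admit this process as a
       real-time task by writing to its own /proc/<pid>/ctl file. */
    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char path[64];
        int fd;

        snprint(path, sizeof path, "/proc/%d/ctl", getpid());
        fd = open(path, OWRITE);
        if(fd < 0)
            sysfatal("open %s: %r", path);

        fprint(fd, "period 10ms\n");   /* release a new slice every 10 ms    */
        fprint(fd, "deadline 5ms\n");  /* each slice must finish within 5 ms */
        fprint(fd, "cost 1ms\n");      /* and may use at most 1 ms of CPU    */
        fprint(fd, "admit\n");         /* admission control accepts or rejects */

        /* ... periodic real-time loop ... */
        exits(nil);
    }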
Linux with a kernel built with PREEMPT_RT?
Is the latest version of the open source code available online?
edit: poking around https://archive.org/details/software?query=qnx
> As I once told one of their sales execs, "quit worrying about people pirating your system and worry about people ignoring it".
That's a very frequent comment about tech companies with underperforming sales.
This is how ARM leapfrogged MIPS around 2010. ARM's licensing was basically "you are buying a USB stick". MIPS, on the other hand, was "pre-pay us $100k just to have our attorney take a look at whether we can sell to you".
In those "deep" tech companies, it's absolutely not unusual to have sales staffed by people with zero background knowledge who are nevertheless star sales professionals.
I remember buying a magazine with this floppy disk attached, blew my mind back then. Great marketing from QNX at the time.
How they did it: http://web.archive.org/web/20011106140711/http://www.qnx.com...
Exactly my thought when reading the title. I remember it took a bit of time to boot because my floppy drive was glacially slow at seeking, but once there it was incredibly reactive.
As a wow factor it probably comes a close second place to when I got to experience BeOS hands-on (which was like, how is that even possible)
That bit of web archaeology was very helpful, much appreciated!
QNX always sounded interesting, until one saw just how much effort the company put into preventing people from actually using it. I vaguely recall a story about someone trying to buy 50 licenses from them for a prototype kiosk thing, but they wanted something like a 1,000-license minimum for a reseller account, and that killed the project.
Being proprietary killed a huge amount of great technology in the '90s and early 2000s.
It also made a large amount of money for those who knew how to walk the fine line between profitability and adoption.
Brought them to life first.
I did some demo projects with QNX in the 90s and I thought it was the best OS ever. Unfortunately trying to license it for use with our company products was a nightmare and after a while I just said the fuck with it.
Now it is basically a company just as predatory as Oracle. It is just sad.
I have fond memories of QNX.
We used QNX in my last company as the foundation for our router. It was a "tandem" HA system (at least one of our lead architects was formerly from Tandem, the company). It had 2x Control Plane (1 Active, 1 Standby) boards, and 3x Data Plane boards (2 Active, 1 Standby). QNX was an important part of our architecture.
Some features I loved in QNX: process control across the network. I could control processes on any of the processors (running QNX) on any of the boards of the system. Launch a program on a different processor with just the appropriate command prefix (which I forget). Also, driver restart: by the nature of being a microkernel, drivers were "just another process", and if they crashed or hung I could just restart/kill the process. Also, tighter coupling between drivers and files under /dev, unlike whatever Linux is doing, especially for networking devices!
While I'm at it, I want to write down the lifecycle of that company.
The router served as a "security endpoint", meaning it could "terminate" (decrypt) thousands of IPSec connections. Thus it would serve as the "border router" for a network operator.
The company's big hit was providing this product to NTT Docomo for its LTE infrastructure. NTT had the (as it turned out, unique) architectural challenge that they controlled the base stations and the core network... but not the backhaul (the connection between the base stations and the core)! The backhaul ran over a shared leased network. So they needed to encrypt [1], hence the IPSec, and hence the need for a "router" that could receive all these connections and decrypt them to feed into their Core Network.
I joined the company shortly after they scored that huge contract, when they were flush with money and looking to grow.
NTT Docomo was a pioneer in LTE deployment, so our company tried to sell this operating model to the rest of the world... but no one took it. It turns out most operators own their backhaul, so they didn't feel the need to encrypt, or at least didn't have the same architecture as NTT.
So our company tried for a while to adapt our router (really a network middle-box, and really its upgraded next version) to other emerging use-cases, but it was hard to get a foothold both in emerging network architectures and against the incumbents (lol the number of times we hit bugs with Cisco equipment that we proved were Cisco's fault, but nope, we just had to work around them).
The company was eventually bought at fire sale price by one that did cheap Software-Defined Networking on commodity hardware. Our expensive router was discontinued.
(Also, fuck Broadcom)
[1] It occurs to me that Snowden's revelations in 2013 happened during my tenure there. However the response of many operators was to have one fat encrypted pipe (which we didn't stand out for) rather than many small encrypted ones (which we did).
(edit: also working with NTT Docomo was another level of reliability requirement compared to the half-assery that was tolerated everywhere else!)
I think I know this company: Stoke. Mobile backhaul SeGW is a big market; it's just that Stoke didn't make it in that market, and its deployment at DCM was replaced entirely within a few years.
In high school I'd extensively used Windows from 3.1-98se, Linux (Debian, Mandrake), and dabbled a ton with BeOS and QNX (hampered from making either my main OS only by software support).
BeOS and QNX (Photon) were my two favorite desktop experiences of the bunch. They were so much better than the others—yes, very much including Linux. And BeOS was even at least as "friendly" and polished as Windows was at the time.
Here we are, and neither is on the desktop; their closest prevalent modern equivalent is probably macOS, which is... fine as a consolation prize, I guess, but I still wish I could see a world where either of those made a real splash in the desktop world (I know QNX wasn't really trying to, but man, it performed so much better as a desktop OS than Windows or Linux).
George Washington once said "The best time to become a Haiku contributor is yesterday. The second best time to become a Haiku contributor is today."
You know, I think he was right.
Do you have experience running Haiku and if so what's the current hardware story like? IOW, could a reasonably determined person get it running as a daily driver on a modern laptop?
There was a very short window of time around 2000 when BeOS was viable as a main OS, at least for a high school kid like me. I think I even got rid of Windows entirely and just had a single BeOS partition for a while. It was sooo fast on my little eMachines computer, which was such a breath of fresh air after having hand-me-down 386s and such that struggled to boot Windows. The only real trouble I remember was that the network stack was kind of buggy and had to be restarted every now and then, and I think printing was pretty non-existent.
Likewise, between 1996 and 1999 I used BeOS as my main driver and I felt like a smug time-traveller from the future walking amongst the rubes. “One Processor Per Person Is Not Enough”: how prescient they were! I knew they were right from the first moment I read their slogan.
I have extremely fond memories of using BeOS as a main driver OS back in the late nineties (‘96-‘98/‘99). First I had a BeBox (dual 603e-133MHz) and later a dual PIII-300MHz. The former was definitely my favourite hardware platform (‘exotic’ RISC architecture combined with das blinkenlights), while the latter far outclassed it once I finally sorted out the video-card driver issues. An absolutely splendid experience. To this day I still adore the chiselled looks of the NeXTStep and BeOS GUIs of the period, but the added colourful “Nintendo-esque” elements of BeOS graphic design attracted me. Oh, and the movable yellow tabs across the windows! I was also getting into amateur astronomy and there was a 3D star-chart application that utterly awed me. I never knew that sitting in my bedroom in mid-1998 with a BeBox, planning an astrophotography shot while Enigma’s Return To Innocence blared in the background, would become the high-water-mark memory of my late adolescence.
For quite a while, watching the 1997 BeOS demo brought tears to my eyes. It was so sweetly designed in every way. Maybe except for the regular multithreading issues. Even the source code, at least the small bit I saw [0], was utterly brilliant.
[0] part of the FS query language, so you could select/filter through file metadata for free.
> yes, very much including Linux
That makes perfect sense for that period, the Linux desktop experience was, well, not atrocious, but definitely left things to be desired.
Well a lot had to be configured especially for non-standard hardware. But generally it was far more stable and snappy than Windows. Also constant reinstalling and rebooting wasn't necessary. The QNX demo was nice but to be fair you couldn't do much with it unless you wrote your own software I guess...
It wasn't just that: it was much worse at handling multi-tasking and keeping the UI responsive than either of those. But so was Windows, to be fair.
I'm pretty sure it's not much better now but hardware's powerful enough to make that less-painful.
both BeOS and QNX were a breath of fresh air after Windows 98. same as yourself, tried both as my main OS in early 2000s while at university.
thanks for the reminder!
SqueakNOS was a project to build a complete operating system via Squeak. In this way you can quickly hack it. There is a great page about these initiatives here: http://wiki.squeak.org/squeak/5727
Prior to SqueakNOS we implemented this: http://swain.webframe.org/squeak/floppy/ (using Linux and modifying Squeak to work with SVGALib instead of X) in just 900mb, inspired by this QNX Demo Disk.
I think you meant 900 kB not MB.
TBH, even now, 900 meg isn't very impressive. ;-)
Right, thanks!
This brings back memories. I remember marveling at this with my best friend when it came out. I got into BeOS around the same time.
I used to use those ad-supported dial up ISP’s and found one that worked with a standard PPP dialer so I didn’t need their software. I remember carrying around the QNX disk and login info so I could get online with basically any computer.
> I used to use those ad-supported dial up ISP’s and found one that worked with a standard PPP dialer so I didn’t need their software
Juno? Netzero?
I feel like it was literally called “freeinternetaccess”, definitely not one of the major ones like NetZero.
I got into these back in the late 90s because of the hackable 3Com Audrey "internet appliance" (remember that term?)
I never hear about QNX anymore, so this is def a blast from the past.
I love how this gets rediscovered by people every few years :)
I still have a floppy set somewhere. I loved it. I ran it on a 486 IBM all-in-one I had, with a compatible NIC, for a long time as a conversation piece and guest light-surfing machine. Amazing how well it ran, up till fairly recently when web standards outstripped its browser too much.
I was there at the time. I just amuse myself digging up obscure, mostly-forgotten languages and OSes and posting them on HN to blow the kids' minds. :-D
Doing the Lord’s Work lol.
I need to rebuild my Ccmp collection. I ended up selling or giving it all away due to having to move. sniff
I ran a NeXTstation Turbo Color as my daily driver up till around 2010. Me and some friends ported over newer line and what not for openstep 4.2.
33mhz 040 with 128mb of 60ns EDO RAM, SCSI HDD. Amazing what you could do with adequate performance on that.
Also ran a BeOS r5 system with massive amounts of hacks and updates for way longer than was really reasonable lol
I still have the Pentium box I used to run it on. Was that really 20+ years ago? How did that happen... I should boot it up and see how well frogfind.com works.
QNX was (is?) such a great OS. This was my first encounter with a microkernel based OS that actually worked, and well.
If I remember correctly, they were moving towards OSS at some point (or at least toward opening it to a wider community). I had it installed in a VM, did some packaging of open source stuff to QNX (bash and irssi, I think), was fun.
At some point they focused on industry/enterprise and that was the end of that for me, but led me to discover L4 later on, and I still have a soft spot for microkernels.
Indeed. I submitted this partly because I get so very tired of Linux zealots claiming that the HURD "proves" that microkernels can't work, or that Minix 3 shows that, OK, they can work, but they're crippled.
A touching tribute to the creator of the demo. https://openqnx.com/node/298
Back when QNX was distributed on floppy, my work locked down their computers so that I couldn't dial out to the internet. So I booted QNX off the floppy and was browsing within minutes. My boss walked in on me browsing the internet and I nearly got fired over it. He was worried because he thought he might get in trouble for it. Once I explained how it worked, he was less worried.
It was amazing that they could fit a semi-functional browser on a floppy.
I once made an fvwm config designed to look like QNX photon (this was early 2000s). I got dozens of emails about it from people who had found the screenshot and wanted to try it!
Here is an old screenshot from ~2003: http://www.xwinman.org/screenshots/fvwm2-taviso.png
The panels and menus were all fvwm; it's highly configurable. I dunno, it kinda holds up!
I once used this for a while :-) Thx!
You can kind of play around with QNX by going here:
https://copy.sh/v86/?profile=qnx
If you're on a Windows system, you may have to go to C:\Windows and temporarily rename the HelpPane.exe system application in order to stop it from hijacking the F1 keypress that QNX expects.
Closest thing we have today (microkernel multiserver OS with a desktop) is Genode[0].
0. https://www.genode.org
Why Genode over Hurd and Minix considering those are also Unix-like same as QNX (which still exists btw)?
Hurd is still stuck using Mach, unfortunately. The issues from the Hurd critique paper haven't been addressed either.
Minix, as cool as it is, does not have a maintainer, and hasn't seen activity for years. There's a lot of out-of-tree work that's just sitting there. It is a shame, because it is a really cool system architecture.
Genode is a modern, proper multiserver OS that has a good architecture, frequent releases and quite solid overall direction. And it has POSIX compatibility, so a lot of software runs, including modern web browser engines.
QNX, with its realtime microkernel and its GUI, is probably one of the most underestimated OSes out there ... a real pity it's not free and open source.
Done properly about two decades ago, that would have slingshotted it against Linux really well.
Reminds me of tomsrtbt
http://www.toms.net/rb/
I used that back when it was current. It was an impressive bit of kit, but TBH, what QNX did on a single floppy made -- and more than ever still makes -- Linux look very very bloated.
Tom's Root-and-Boot just about got you a working command line on one floppy.
For comparison, using FOSS equivalents, QNX got that, plus all of X.org, plus Firefox, onto one floppy.
And if you used the status bar to find your IP address, and went to another machine and put that in a browser's URL box, you found that as well as all that, it was also a live webserver, serving live performance stats to the Internet.
So, kernel, busybox, X server, desktop, web browser AND WEB SERVER on one (very heavily compressed) floppy.
Sadly, the genius who built it died young. Cancer. Fsck cancer.
https://openqnx.com/node/298
> For comparison, using FOSS equivalents, QNX got that, plus all of X.org, plus Firefox, onto one floppy.
Except it’s not really ALL of X, and the browser is more like IE2 in capabilities…
For a more fair comparison, where QNX wins in terms of absolute size, but Linux wins in terms of functionality, there’s muLinux: http://micheleandreoli.org/public/Software/mulinux/
I remember being impressed by the Geoworks environment that was contained in America Online 1.0 (?) for DOS.
So snappy and smooth compared to Windows 3.11 on the same 2MB 386.
Now FOSS:
https://github.com/bluewaysw/pcgeos
There was a time before this article when I put QNX on an old iPAQ Pocket PC. Back then (mid 2000s) it was strange to see a Unix OS running on a handheld device!
EDIT: I think I found a link for it:
https://eqip.openqnx.com/sites/eqip.openqnx.com/files/ipaq_b...
I wonder what could be improved with today's knowledge while keeping it under 2MB.
I don't think there is anything today that comes close to equalling it, and very definitely nothing that can improve on it.
Even without knowing it, I'm sure it's far from perfect and may enjoy a few tweaks here and there.
Why would you keep it under 2MB?
I restricted myself, I wanted to say 1MB.
I can't help but find the min-max of everything. Well, at least fantasize about what could be.
For one, to put retro systems to great use. For another, to keep as much cruft/unnecessary bells & whistles from being present as possible. Just because the space is there doesn't mean it has to be used.
A while back, Ford used QNX for their infotainment. Not sure if they still do.
I interviewed at Panasonic years ago due to their in car info systems using QNX.
Also, Coca-Cola’s kiosks for drink selection use QNX.
In both cases, they’re using Qt instead of Photon.
Photon was just about perfect as a GUI architecture IMHO, although I don’t know if it could have handled alpha compositing as well as it handled blit stuff across a network connection.
Other automotive OEMs are using it also for other use cases, such as compute nodes doing plenty of crunching mixed with real-time control.
I consider myself really lucky. Besides the really simple OSes found on the Commodore VIC-20, 64, PET and CBM machines, QNX is the first full-fledged OS I learned on. I'm still a Unix-style evangelist to this day. Students one year younger than me learned on Windows, and they didn't cover nearly the amount of ground that my year did.
We had QNX running on a system called an ICON, which had lots of bells and whistles that made using them quite fun. It came with all the languages (the year-after students only learned BASIC and Turing), and had a voice synthesizer. Oh, and way better graphics - you could draw something in QPaint or Logo, and then export it as a header file to use in your C, Pascal, etc programs. We used that functionality for making splash screens for our assignments, even though that wasn't an actual requirement.
This is one of my favorite things in computing.
I used to code a system that ran QNX 4 - it was great - not quite Linux but fast and responsive.
The only issue was the file system and driver support - if it crashed, there were no good recovery tools.
Ok, so skipping the kernel bit, where can I download the source to a GUI and a browser that runs off of 1.44MB binary?
Why can't we run one of those on a generic Linux kernel?
Something similar: https://news.ycombinator.com/item?id=28515025
I remember getting a copy of QNX “Proton” (?) on a CD-ROM with a Danish computer magazine in the late ’90s. About the same time they distributed Red Hat Linux (5.2/Hedwig) and BeOS. I don’t recall if the QNX desktop was actually installable or if it was only a RAM disk. I remember it as blazing fast.
About 15 years ago, QNX was the OS for a couple different brands of multiplexers for ATSC television. After making adjustments to the streams, I’d fire up the solitaire game where the backs of the cards were a homage to the computing science heavyweights.
Ages ago I had original QNX Demo Disk. I was impressed that they could fit so much on a floppy.
This is the original QNX Demo Disk. That's why I posted it. :-)
Downloads here:
https://winworldpc.com/product/qnx/144mb-demo
Screenshots here:
http://toastytech.com/guis/qnxdemo.html
Oh the memories. Thank you!
Same. I pulled aside peers to show them. I was gobsmacked.
Honestly I'm still impressed. It was astonishing at the time.
I remember attending the Embedded Systems Conference frequently during the late 90s and early 2000s. QNX always had a strong presence there.
It always seemed too "big" (too capable, too complex, too pricey) for my projects so I never took it for a spin.
The QNX thing aside, a whole OS on a floppy disk wasn't anything special at that time (the article is from 2008, and I suspect the disk images were some years older).
I remember playing with MenuetOS and some other Linux “floppy distros” back in the day…
Must be "old OS week" seeing this and the recent OS/2 post: https://news.ycombinator.com/item?id=33107065
Here the same, but done with linux: https://news.ycombinator.com/item?id=28515025
I suspect the functionality is rather less, but still, that is impressive.
(2008)
Is there any good way to play with QNX on a workstation/VM to get familiar with it?
You can go to qnx.com and click the "FREE 30-DAY TRIAL" button to download.
This needs a [1999] or a [2008] in the title. (Original release, modification date.)
Somewhat similar to MenuetOS: https://news.ycombinator.com/item?id=31290789
I had one of those in another life :-)
"xNix", really? I thought we'd settled on using Unix as a generic name for all Unix-like operating systems.
Unix is very much not generic, and is ®, ™ and © to the Open Group.
Novell donated the trademark to them after it bought Unix System Laboratories from AT&T in 1993.
One Linux is currently a registered UNIX™: Huawei EulerOS
https://www.opengroup.org/openbrand/register/brand3622.htm
Formerly Inspur K-UX was, too:
https://www.opengroup.org/openbrand/register/brand3617.htm
You can view the list here:
https://www.opengroup.org/openbrand/register/
Apple macOS, IBM AIX, HP-UX, the 2 old SCO OSes, and -- oddly -- IBM z/OS.
But no, UNIX is very much not generic, and I generally find Linux people get very upset when I call it a UNIX. Which it is, but even 29 years on, people still think that "Unix" means "based on AT&T code".
Band-Aid, Bubble Wrap, Aspirin and many alike aren't generic either, but we use them as generic names. The same applies to Unix. Nobody means "V7 UNIX" or "Only OS implementations that conform to Open Group's standards" when they say "Minix is a Unix". They just mean "conforming to the original Unix design philosophy, as in file system structure, command-line tools, and process management". As I understand, Open Group's approval's just corporate politics to get permission for using the term "Unix" in marketing materials for enterprise customers, nothing else.
You can argue all you want that Minix (the most popular Unix in existence) or Ubuntu (most popular Linux distro in existence) aren't Unix. They're Unix. They're probably more Unix than most Unices on that list. Some company owning the brand and enforcing some arbitrary licensing scheme doesn't change that.
I miss QNX.
I have long wondered... How was it possible to construct this? Is it just extreme self-discipline, omitting everything not strictly essential? Or is there an underlying formalism that makes a very small amount of code do a very great deal?
It seems like the only way is for most of the code in the system to contribute to a multitude of different uses... Maybe an interpreter with especially powerful, composable operations and a very compact representation, and most of the system coded to that interpreter? (That got Apollo 11 from orbit to the moon and back.)
> I have long wondered... How was it possible to construct this? Is it just extreme self-discipline, omitting everything not strictly essential? Or is there an underlying formalism that makes a very small amount of code do a very great deal?
Two really good designers, Gordon Bell and Dan Dodge.
Here are the key design decisions:
- All the kernel does is manage memory, dispatch processes, handle interrupts and timers, and pass messages from process to process. No device drivers, no file systems. Everything else is in user space. The kernel is small (about 60KB of code in some versions), very well written, and rarely changed. It's so small, so heavily used, and so rarely changed that it reached an essentially bug-free state.
- All non-preemptable kernel operations have fixed upper bounds on how long they can take, worst case. That upper bound is in microseconds. That's why real time works.
- Message passing and CPU scheduling are integrated and very efficient. In particular, the case of sending a message to another process and getting a reply message is not only low overhead, but does not involve losing your turn for the CPU. The two systems are designed together. So calling another process can be used almost like a subroutine call. Most interprocess message systems botch this, and calling another process means two or more trips through the CPU scheduler and may cost you your CPU quantum. Which means a "microservice architecture" runs too slowly. Which means people work around making interprocess calls, putting too much in one process.
- There is no swapping or paging. This is essential for real-time. It simplifies other things. Message passing is copying user space to user space, and can't page fault. The destination area has to be in memory. So there is no need for kernel data buffers.
- At boot time, the boot program loads the kernel, a user space utility process called "proc", plus any user space drivers or programs you want at startup. That's how it gets started with no device drivers in the kernel. More drivers can be loaded later, if desired. Having a file system or disk is optional. The minimal configuration is a CPU, boot ROM with the software, and memory. That's a common configuration for small embedded systems.
- File systems, networking, and drivers are all user programs. Some have the privilege of writing to device space. The kernel turns interrupts into interprocess messages.
It's a nice architecture for small and medium real-time systems that have to Just Work.
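To illustrate the point above about file systems, networking, and drivers being ordinary user programs, here's a hedged, from-memory sketch (the name "demo/fake_driver" is made up for illustration) of how a user-space server on QNX Neutrino registers itself under a name, and how a client finds it and talks to it with the same send/receive/reply round trip. If the server misbehaves, you just restart the process; the kernel's only involvement is passing the messages.

    /* Sketch of a user-space "driver" process and its client on QNX Neutrino.
       Untested; error handling kept minimal. */
    #include <sys/dispatch.h>   /* name_attach(), name_open() */
    #include <sys/neutrino.h>   /* MsgReceive(), MsgReply(), MsgSend() */
    #include <stdio.h>

    #define SERVER_NAME "demo/fake_driver"   /* hypothetical name */

    /* The "driver": receive requests, service them, reply. */
    int run_server(void) {
        name_attach_t *att = name_attach(NULL, SERVER_NAME, 0);
        char msg[64];
        if (att == NULL) return -1;
        for (;;) {
            int rcvid = MsgReceive(att->chid, msg, sizeof msg, NULL);
            if (rcvid > 0)                     /* >0: a message (0 would be a pulse) */
                MsgReply(rcvid, 0, "ok", 3);   /* unblock the client */
        }
    }

    /* A client: look the "driver" up by name and make a blocking call. */
    int run_client(void) {
        char reply[64];
        int coid = name_open(SERVER_NAME, 0);
        if (coid == -1) return -1;
        MsgSend(coid, "do-something", 13, reply, sizeof reply);
        printf("reply: %s\n", reply);
        return name_close(coid);
    }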
This sounds almost exactly like Erlang at a lower and even more time-critical level, which is super cool.
The idea that the kernel does so little is really cool. I don’t know if it’d work at the true microcontroller level where 128K is precious memory, but I can definitely see the larger microcontrollers that exist now, or even stuff like FPGA SoCs, being useful with this sort of setup. Sounds cool.
> So calling another process can be used almost like a subroutine call.
Any numbers on the overhead of a message send/reply round trip compared to a subroutine call? I always assumed it was just axiomatic that the difference in latency between those two options would be orders of magnitude.
I learned a lot from this comment, thank you. If you could add - what would be a good architecture for large RT systems?
This explains (thank you) how they can provide real-time service and reliability, but does not explain how they cram all the GUI and browser functionality into 1.4MB.
Is there anything about L4 that would make it worse in the role of that kernel?