Moss: a Rust Linux-compatible kernel in 26,000 lines of code

3 months ago (github.com)

Hello!

For the past 8 months or so, I've been working on a project to create a Linux-compatible kernel in nothing but Rust and assembly. I finally feel as though I have enough written that I'd like to share it with the community!

I'm currently targeting the ARM64 arch, as that's what I know best. It runs on QEMU as well as various dev boards that I've got lying around (Pi 4, Jetson Nano, AMD Kria, i.MX8, etc.). It has enough implemented to run most BusyBox commands on the console.

Major things missing at the moment: a decent FS driver (only read-only FAT32 so far) and networking support.

More info is in the GitHub README.

https://github.com/hexagonal-sun/moss

Comments & contributions welcome!

  • Cool project, congrats. I like the libkernel idea, which makes debugging easier before going to "hardware". It's like getting the advantages of a microkernel in a monolithic kernel, without the huge size of LKL, UML or rump kernels. Doesn't Rust async/await depend on runtime and OS features? Using it in the kernel sounds like a complex bootstrapping challenge.

    • Rust's async/await is executor-agnostic and doesn't depend on the OS at all. It is just syntactic sugar for futures as state machines, where the "await points" are your states.

      An executor (I think this is what you meant by runtime) is nothing special and doesn't need to be tied to OS features at all. You can poll and run futures in a single thread. It's just something that holds and runs futures to completion.

      Not very different from an OS scheduler, except it is cooperative instead of preemptive. It's a drop in the ocean of kernel complexities.
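
      To make that concrete, here is roughly the smallest single-threaded executor that drives one future to completion. This is just a sketch with a do-nothing waker: no threads, no OS services, and nothing Moss-specific.

        use std::{
            future::Future,
            pin::pin,
            ptr,
            task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
        };

        // A waker that does nothing; good enough for a busy-polling executor.
        unsafe fn clone(_: *const ()) -> RawWaker {
            RawWaker::new(ptr::null(), &NOOP_VTABLE)
        }
        unsafe fn noop(_: *const ()) {}
        static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

        // Drive a single future to completion on the current thread.
        fn block_on<F: Future>(fut: F) -> F::Output {
            let mut fut = pin!(fut);
            let waker = unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &NOOP_VTABLE)) };
            let mut cx = Context::from_waker(&waker);
            loop {
                match fut.as_mut().poll(&mut cx) {
                    Poll::Ready(out) => return out,
                    // A real executor would park here until a waker fires.
                    Poll::Pending => std::hint::spin_loop(),
                }
            }
        }

        fn main() {
            println!("{}", block_on(async { 21 * 2 }));
        }

      A real executor would park instead of spinning and use wakers that put tasks back on a run queue, but none of that needs anything from an OS.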

      12 replies →

    • This has been a real help! The ability to easily verify the behavior of certain pieces of code (especially mem management code) must have saved me hours of debugging.

      Regarding the async code, sibling posts have addressed this. However, if you want a taste of how this is implemented in Moss, look at src/sched/waker.rs, src/sched/mod.rs and src/sched/uspc_ret.rs. These files cover the majority of the executor implementation.
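
      For anyone curious about the general shape (this is only an illustration using std types for brevity, not the actual Moss code): the waker's whole job is to mark its task runnable again, e.g. by pushing a task ID back onto a run queue.

        use std::{
            collections::VecDeque,
            sync::{Arc, Mutex},
            task::{Wake, Waker},
        };

        // Hypothetical run queue of ready task IDs.
        struct RunQueue {
            ready: Mutex<VecDeque<usize>>,
        }

        // Waking a task just means marking it runnable again.
        struct TaskWaker {
            task_id: usize,
            queue: Arc<RunQueue>,
        }

        impl Wake for TaskWaker {
            fn wake(self: Arc<Self>) {
                self.queue.ready.lock().unwrap().push_back(self.task_id);
            }
        }

        fn main() {
            let queue = Arc::new(RunQueue { ready: Mutex::new(VecDeque::new()) });
            let waker = Waker::from(Arc::new(TaskWaker { task_id: 7, queue: queue.clone() }));
            waker.wake_by_ref(); // e.g. called from a timer or UART interrupt handler
            assert_eq!(queue.ready.lock().unwrap().pop_front(), Some(7));
        }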

  • > no networking support

    Would something like Smoltcp be of help here? https://github.com/smoltcp-rs/smoltcp

    Great project either way!

    How do you decide which syscalls to work on? Is it based on what the user-space binaries demand?

    • Yip, I panic whenever I encounter a syscall that I can't handle, and that prompts me to implement it (rough sketch of the idea below).

      Yeah, I was thinking of integrating that at some point. They've done a really nice job of keeping it no_std-friendly.
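
      For the first point, the workflow looks roughly like this toy sketch (not the real Moss dispatcher; the arm64 syscall numbers are real, the handlers are stubs). Anything unimplemented panics and becomes the next to-do item.

        // arm64 Linux syscall numbers (read = 63, write = 64, exit = 93).
        const SYS_READ: u64 = 63;
        const SYS_WRITE: u64 = 64;
        const SYS_EXIT: u64 = 93;

        fn dispatch(nr: u64, args: [u64; 6]) -> i64 {
            match nr {
                SYS_READ => 0,               // stub: pretend we read nothing
                SYS_WRITE => args[2] as i64, // stub: pretend the whole buffer was written
                SYS_EXIT => 0,               // stub: a real kernel tears the task down here
                // Anything BusyBox asks for that isn't handled yet blows up loudly,
                // which is exactly the to-do list.
                _ => unimplemented!("unhandled syscall {nr}, args {args:?}"),
            }
        }

        fn main() {
            println!("{}", dispatch(SYS_WRITE, [1, 0, 12, 0, 0, 0]));
        }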

      1 reply →

  • Love the MIT license. If this were further along we could use this as the foundation of our business without having to "give back" device drivers and other things.

    • This is exactly the sort of red flag to take note of. There's an LLVM fork for every esoteric architecture now, and this sort of thinking will lead to never again being able to run your own software on your own hardware. A reversion to the dark ages of computing.

      11 replies →

    • MIT licensed code is a gift. A gift indeed doesn't require the recipient to give back anything related to the gift.

      A "gift" requiring GPL-like conditions isn't really a gift in the common sense. It's more like a contractual agreement with something provided and specific, non-negotiable obligations. They're giving while also asserting control over others' lives, hoping for a specific outcome. That's not just a gift.

      People doing MIT license are often generous enough where the code is a gift to everyone. They don't try to control their lives or societal outcomes with extra obligations. They're just giving. So, I'm grateful to them for both OSS and business adaptations of their gifts.

      23 replies →

  • Very impressive, and I like how accessible the codebase is. Plus, safe Rust makes it very hard to shoot yourself in the foot, which is good for outside contributions. Great work!

    After you got the BusyBox shell running, how long did it take to add Vim support? What challenges did you face? Did you cross-compile it?

  • Congratulations on the progress. If I may ask, I'm curious what considerations have motivated your choice of licence (especially since pushover licences seem extremely popular with all kinds of different Rust projects, as opposed to copyleft).

    • I’ve pretty much only seen MIT and to a lesser extent GPL on most open source projects. Would you expect a different license?

    • Copyleft doesn't work well with Rust's ecosystem of many small crates and heavy reliance on libraries alongside static linking.

      If one library is GPLv2 and another is GPLv3, they can't be used together in one project. The LGPL solves nothing because everything is statically linked anyway. And yes, one could licence under both at the user's choice, but then GPLv4 comes out and the process repeats itself; and yes, one could use GPLv2+, but people aren't exactly willing to licence under a licence that doesn't yet exist and put blind faith in whoever writes it.

      Using anything but a permissive licence is a good way to ensure no one will use your library and someone will just re-implement it under a permissive one.

      C is a completely different landscape. Libraries are larger, the wheel is re-invented more often, and above all dynamic linking is used a lot, so the LGPL solves a lot.

      3 replies →

  • Impressive work! Do you have any goals, other than learning and having fun?

    Also, how does its design compare with Redox and Asterinas?

  • How does Android compatibility look? Can this be compiled to WebAssembly and run in the browser?

The choice of MIT for a kernel feels like setting up the project to be cannibalized rather than contributed to.

We've seen this movie before with the BSDs. Hardware vendors love permissive licenses because they can fork, add their proprietary HAL/drivers, and ship a closed binary blob without ever upstreaming a single fix.

Linux won specifically because the GPL forced the "greedy" actors to collaborate. In the embedded space, an MIT kernel is just free R&D for a vendor who will lock the bootloader anyway.

  • Not sure why I'm getting in the middle of this, but I need to point out that you are not even correct about Linux.

    Linux rather famously has avoided the GPL3 and is distributed under a modified GPL2. This license allows binary blob modules. We are all very familiar with this.

    As a result, the kernel that matches your description above that ships in the highest volume is Linux by a massive margin. Can you run a fully open source Linux kernel on your Android phone? Probably not. You do not have the drivers. You may not pass the security checks.

    Do companies like Broadcom “collaborate” on Linux even in the PC or Mac space? Not really.

    On the other side, companies that use FreeBSD do actually contribute a lot of code. This includes Netflix most famously but even Sony gives back.

    The vast majority of vendors that use Linux embedded never contribute a single line of code (like 80% or more at least - maybe 98%). Very few of them even make the kernel code they use available. I worked in video surveillance where every video recorder and camera in the entire industry is Linux based at this point. Almost none of them distribute source code.

    But even the GPL-versus-permissive story is wrong in the real world.

    You get great industry players like Valve that contribute a lot of code. And guess what, a lot of that code is licensed permissively. A lot of other companies also continue to contribute to Mesa, Wayland, Xorg, PipeWire, and other parts of the stack that are permissively licensed. The level of contribution has nothing to do with the GPL.

    How about other important projects? There are more big companies contributing to LLVM/Clang (permissive) than there are to GCC (GPL).

    In fact, the GPL often discourages collaboration. Apple is a great example of a company that will not contribute to even the GPL projects that they rely on. But they do contribute a fair bit of Open Source code permissively. And they are not even one of the “good guys” in Open Source.

    This comment is pure ideological mythology.

      A few vendors have been stopped from shipping binary modules with Linux, notably those linking against certain (GPL-only) symbols. Enough vendors have contributed enough to make Linux actually usable on the desktop with a wide range of off-the-shelf hardware, and more and more are announcing day-one compatibility or open-source contributions. The same is hardly true for the BSDs.

      It's obvious why Sony keeps certain drivers closed source while open-sourcing other things, and why Nvidia decided to go with an open-source driver. It's not hard to understand: it could be some pressure, or the modified GPLv2 itself.

    • > In fact, the GPL often discourages collaboration

      Not true. Yes, companies choose not to contribute, so they discourage themselves. It's not inherent to the GPL.

      1 reply →

    • >Probably not.

      Probably not, but possibly yes. Which is more than the cuck license guarantees. See postmarketOS and such, which would be impossible in a BSD world.

      >The vast majority of vendors that use Linux embedded never contribute a single line of code

      It doesn't matter. The point is just that they can be legally compelled to if needed. That is better than nothing.

      >The level of contribution has nothing to do with the GPL.

      None of this would be feasible if Linux weren't a platform where the drivers work. They wouldn't have worked on the Linux userspace in the first place if it didn't have driver support: it wouldn't have been a viable competitor to Windows, and the whole PC platform would probably be locked down anyway without a decent competitor. Permissive software is parasitic in the sense that it benefits from interoperating in a copyleft environment but cooperates with attempts to lock down the market.

      LLVM was made after GCC and is designed with a different architecture. It is apples and oranges.

      Apple is a great example of a company that is flooding the world with locked-down devices. Everything they do is an obstacle to general-purpose computing. What do they meaningfully give back to the commons? Swift? WebKit? It is part of a strategy to improve their lock-in and ultimately make collaboration impossible.

  • I think GCC is the real shining example of a GPL success: it broke through a rut of high-cost developer tooling in the 1990s and became the de facto compiler for UNIX and embedded BSPs (Board Support Packages), while training corporations on how to deal with all this.

    But then LLVM showed up and showed that it is no longer imperative to have a viral license to sustain corporate OSS. That might not have been possible without the land-clearing GCC accomplished, but times are different now and corporations have a better understanding of, and relationship with, OSS.

    The GPL leaves enough room to opt out of contributing (e.g. services businesses, or simply stacking so much complexity into a BSP that vendor lock-in is assured) that it isn't a defining concern for most users.

    Therefore I don't think Linux's success has much to do with the GPL. It has been effective in the BSP space, but the parts most people care about and associate with Linux could easily be MIT with no significant consequence for velocity and participation. In fact, a lot of the DRM (graphics driver) code is already dual-licensed that way.

    • > But then LLVM showed up and showed it is no longer imperative to have a viral license

      I am not sure I remember everything right, but as far as I recall, Apple originally maintained a fork of GCC for its Objective-C language and didn't provide clean patches upstream; instead it threw its weight behind LLVM the moment that became even remotely viable, so it could avoid the issue entirely.

      Also, GCC didn't provide APIs for IDE integration early on, causing significant problems for attempts to implement features like refactoring support on top of it. People had the choice of using LLVM, half-assing it with ctags, or sticking with plain-text search and replace like RMS intended.

  • > Linux won specifically because the GPL forced the "greedy" actors to collaborate.

    How do we know that? It seems to me that a greater factor in the success of Linux was the idealism and the community. It was about freedom. Linux was the "Revolution OS", and the hacker community couldn't help but fall in love with Linux and a community that embodied their ideals. They contributed to it and they founded new kinds of firms that (at least when they began) committed themselves to respecting those principles.

    I realise the memory of Linux's roots in hacker culture is fading fast, but I really do think this might have been the key factor in Linux's growth. It reached a critical mass that way.

    I'm quite certain this was more important, anyway, than the fact that, for instance, Linksys eventually had to release the source code to their modifications to the Linux kernel running on the WRT54G (they didn't at first). I don't think things like that played much of a role at all.

    Linksys were certainly kind enough to permit people to flash their own firmware to that router, and that helped grow Linux in that area. They even released a special WRT54GL edition to facilitate custom firmware. But they could just as easily have Tivoised it (something that the Linux licence does not forbid) and that would've been the end of the story.

    • We can't really prove it but I noticed a lot of people worked on BSD for a few years, got poached by Sun/NeXT/BSDI/NetApp, then mostly stopped contributing to open source. Meanwhile early Linux devs continued contributing to Linux for decades.

  • Kinda sad that the top comment on this really interesting project is complaining about the license, reiterating the trite conventional wisdom on this topic, which is based on basically two data points (Linux and BSD) (probably because any time someone tries something new, they get beaten down by people who complain that BSD and Linux already exist, but that's another topic).

  • This comment does not contribute to discussion of TFA: it's just license flamewar bait.

    The authors almost certainly gave a bit of thought to their choice of license. The choice of license is a "business choice" that has to do with the author(s)' goals, and it is a choice best seen as intending to achieve those goals. Those goals can be very different from your own goals, and that's fine! There is no need to shame TFA for their choice of license, or implicitly for their goals as opposed to yours.

  • This comment is a tangential distraction, but it's not even correct. Linus Torvalds has specifically said that he wouldn't have created Linux at all if 386BSD had been available at the time. But BSD was tied up in a lawsuit with USL, which discouraged companies and individuals from using it.

  • Not meaning to single you out specifically, but this entire discussion, all of this license gatekeeping, is ridiculous. This is a very cool project, but if the license ruins it for you, there are zillions of open-source GPLv3 kernels.

    I mean, this is not different from bitching about someone writing their custom kernel in C++ instead of Rust, or Zig. It’s not your project! Let people do their own thing! MIT is a perfectly fine license; maybe the lack of zealotry associated with it would even be a positive thing for whatever community might be built around this eventually, if the author is even interested in having other contributions.

Very cool project! I do have to admit - looking far, far into the future - I am a bit scared of a Linux ABI-compatible kernel with an MIT license.

  • I agree. I know a lot of people aren't huge fans of it, but in the long run Linux being GPLv2 was a huge factor in its success.

  • Too late? https://docs.freebsd.org/en/books/handbook/linuxemu/

    • Somewhere there is a dark timeline where the BSDs won, there are 50 commercial and open source variants all with their own kernel and userland. The only promise of interoperability is in extremely ossified layers like POSIX. There is, however, something terrible gathering its strength. A colossus. The great Shade that will eat the net. In boardroom meetings across the land, CTOs whisper its name and tremble... "OS/2."

      1 reply →

  • > I am a bit scared of a Linux ABI-compatible kernel with an MIT license.

    What's the threat? Solaris/Illumos, the BSDs, even Windows have all tried, sometimes more than once, to be compatible with the Linux ABI, and in the end they've all given up because the Linux ABI evolves way too fast to keep up with and is underdocumented. Someday someone, perhaps TFA, will succeed in building momentum for a well-defined and highly functional least-common-denominator subset of the Linux ABI, and that will be a very good thing (IMO) regardless of their choice of license.

    I guess you imagine that everyone will switch to Moss and oh-noes!-everyone-will-be-free-to-not-contribute-back!! So what?

  • Why?

    • Because otherwise big tech companies will take it, modify it, and release hardware with it without releasing their patches, etc.? Basically being selfish and greedy?

      9 replies →

    • Because, unlike most other functionality, you generally need hardware specs or vendor cooperation to write drivers (see Nvidia's GSP).

      Anyone can write Photoshop (given reasonable resources). The problem is going to be the proprietary file format and compatibility with the ecosystem. It's the same with hardware, except several orders of magnitude worse.

  • FreeBSD already has Linux ABI compatibility and has for a long time.

    I have to say the GPL trolling in this post is some of the worst I've ever seen on HN. Literally 99% of the comments are GPL trolls coming in and threadshitting everywhere. It's genuinely disgusting.

Really neat. Do you have any specific long term goals for it? Eg, provide an OS distro (using Linux drivers?) to provide memory safety for security-critical contexts?

Also, are there any opportunities to make this kernel significantly faster than Linux?

  • Eventually, it'd be amazing to use Moss as my daily-driver OS. That means targeting the specific hardware that I have, but in doing so, I hope to build up enough of the abstractions to allow easier porting to other hardware.

    A more concrete mid-term goal is for it to be 'self-hosting'. By that I mean you could edit the code, download dependencies and compile the kernel from within Moss.

    • Are you interested in beating Linux performance-wise? Eg:

      - Moving away from the too-small 4 KiB default page size (while having a good strategy for dealing with fragmentation)?

      - Making it easy to minimize/track interrupts on a core, for low-latency contexts?

I don't know much about Linux internals - how difficult would it be to reimplement KVM? I'm guessing a big undertaking.

To what extent is this compatible with Linux?

Could I swap Ubuntu's or Android's kernel with this, while keeping those OSes bootable?

  • While it's a very legitimate question, the answer is between the lines in the README, and it mostly means there is user-space binary compatibility for everything that is implemented.

    It might seem obscure, but syscalls are the interface user space uses to reach the kernel, and matching them exactly is what makes existing binaries work without recompilation. That's their approach, and it's where the compatibility really means something: since you can cross-compile on another machine, they don't need a full toolchain right away. Just compile your code on a Linux machine and run it on Moss. You're at the mercy of whichever kernel APIs are still missing, but it looks like a very good strategy if your aim is to write a kernel, since you only have to focus on implementing the actual syscalls without getting distracted by the toolchain.
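
    Concretely, a statically linked aarch64 binary built on a Linux box boils down to raw traps like the sketch below (my own illustration, not code from the Moss repo): syscall number in x8, arguments in x0-x5, then svc #0. As long as the kernel on the other side honours that convention, the binary neither knows nor cares whether Linux or Moss is answering.

      // aarch64 only; build on a Linux box with e.g.
      //   cargo build --target aarch64-unknown-linux-musl
      use std::arch::asm;

      fn raw_write(fd: usize, buf: &[u8]) -> isize {
          let mut ret = fd; // x0 carries the first argument in and the result out
          unsafe {
              asm!(
                  "svc #0",
                  in("x8") 64usize, // __NR_write on arm64
                  inout("x0") ret,
                  in("x1") buf.as_ptr(),
                  in("x2") buf.len(),
                  options(nostack),
              );
          }
          ret as isize
      }

      fn main() {
          // No libc in the picture: whichever kernel is running only sees the trap.
          raw_write(1, b"hello from a raw write syscall\n");
      }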

very impressive! i think this is a far better approach to bringing rust's advantages to linux than trying to squeeze rust into the existing linux kernel. best of luck!

[flagged]

  • I understand that you've only been on HN for 7 days, but please don't do this. It's gross.

  • Just about everything of worth in operating systems (and in software in general) was already invented in those decades.

  • Just shows how little we have achieved since then. In both hardware architecture and software based on that hardware.

  • Wait until they get to the networking layer; you're going to hate what Vint Cerf did in the 70s :)