This is the approach used by Kata Containers/Firecracker. It's not much heavier than the shared kernel approach, but has significantly better security. A bug in the container runtime doesn't immediately break the separation between containers.
The performance overhead of the VM is minimal; the main tradeoff is container startup time.
I could imagine one Linux kernel running in a VM (on top of MacOS) and then containers inside that host OS. So 1 base instance (MacOS), 1 hypervisor (Linux L0), 12 containers (using that L0 kernel).
Shoutout to Michael Crosby, the person in this video, who was instrumental in getting Open Containers (https://opencontainers.org) to v1.0. He was a steady and calm force through a very rocky process.
"A new report from Protocol today details that Apple has gone on a cloud computing hiring spree over the last few months... Michael Crosby, one of a handful of ex-Docker engineers to join Apple this year. Michael is who we can thank for containers as they exist today. He was the powerhouse engineer behind all of it, said a former colleague who asked to remain anonymous."
I would assume that "lightweight" in this case means that they share a single Linux kernel. Or that there is an emulation layer that maps the Linux Kernel API to macOS. In any case, I don't think that they are running a Linux kernel per container.
Interesting choice - doesn't that then mean container-to-container integration is going to be harder, with a lot of overhead per container? I would have thought a shared VM made more sense. I wonder what attracted them to this.
FreeBSD has the Linuxulator and illumos comes with lx-zones, which allow running some native Linux binaries inside a "container". No idea why Apple didn't go for a similar option.
syscalls are just a fraction of the surface area. There are many files across many different virtual filesystems you need to implement, plus things like SELinux, eBPF, io_uring, etc. It's also a constantly shifting target. The VM API is much simpler, relatively stable, and already implemented.
Emulating Linux only makes sense on devices with constrained resources.
The CLI from the press release/WWDC session is at https://github.com/apple/container which I think is what many like myself would be interested in. I was hoping this'd be shipped with the newest Xcode Beta but that doesn't seem to be the case. Prebuilt packages are missing at the moment but they are working on it: https://github.com/apple/container/issues/54
Well it makes developing Docker Desktop infinitely easier for them, since they no longer need to start their own Linux VM under the hood. I think the software is "sticky" enough that people will still prefer to use Docker Desktop for the familiar CLI and UX, Docker Compose, and all the Docker-specific quirks that make migrating to a different container runtime basically impossible.
Docker Desktop on Windows uses WSL to provide the Docker daemon, doesn't it? So Docker Desktop has a history of leaning into OS offerings for virtualizing Linux like this where they can.
Unless this provides an extremely compatible Docker socket implementation, this is the answer. When Docker changed the licensing for Docker Desktop, my previous employer made it harder to get permission. However, there were a few tools that were in common usage, and once you mentioned that you used them, you got your permission.
Some progress has been made to create a non-Docker implementation that integrates with all those random tools that expect to be able to yeet bytes into or out of the Docker socket, but I still hit blockers the last time I tried.
This doesn't compete with Docker for Desktop, as it's more low-level than that.
Docker for Desktop sits on top of container/virtualization software (Hypervisor.framework and QEMU on Mac, WSL on Windows, containerd on Linux). So there's a good chance that future versions of Docker for Desktop will use this library, but they don't really compete with each other.
I guess it'll depend on whether or not this starts shipping by default with new macOS installs.
If it doesn't, then it's still a toss-up whether a user chooses docker/podman/this...etc.
If it ends up shipping by default and is largely compatible with the same command line flags and socket API... Then docker has a problem.
For what it's worth, I prefer podman but even on Linux where the differentiators should be close to zero, I still find certain things that only docker does.
This is the most surprising and interesting part, imo:
> Contributions to `container` are welcomed and encouraged. Please see our main contributing guide for more information.
This is quite unusual for Apple, isn't it? WebKit was basically a hostile fork of KHTML, Darwin has basically been something they throw parts of over the wall every now and then, etc.
I hope this and other projects Apple has recently put up on GitHub see fruitful collaboration from user-developers.
I'm a F/OSS guy at heart who has reluctantly become a daily Mac user due to corporate constraints that preclude Linux. Over the past couple of years, Apple Silicon has convinced me to use an Apple computer as my main laptop at home (nowadays more comparable, Linux-friendly alternatives seem closer now than when I got my personal MacBook, and I'm still excited for them). This kind of thing seems like a positive change that lets me feel less conflicted.
Anyway, success here could perhaps be part of a virtuous cycle of increasing community collaboration in the way Apple engages with open-source. I imagine a lot of developers, like me, would both personally benefit from this and respect Apple for it.
Chromium is a hostile fork of WebKit. WebKit was a rather polite fork of KHTML; it's just that they had a team of full-time programmers, so KHTML couldn't keep up with the upstream requests and gave up, since WebKit did a better job anyway.
I personally would LOVE if a corporation did this to any of my open source projects.
And the creator of KHTML is now part of WebKit team at Apple.
Even KDE eventually dropped KHTML in favor of KHTML’s own successors: WebKit-based engines (like QtWebKit, and later Qt WebEngine, based on Chromium).
A web engine isn’t just software — it needs to keep evolving.
Recognising the value of someone’s work is better than ignoring it and trying to build everything from scratch on your own; Microsoft's Internet Explorer did not last.
Blink is the hostile fork of WebKit. And you would not like it if any corporation did this to your open source project; on HN alone I see a small army's worth of people who bitch about websites built for Chrome but not Safari. That's how Konqueror users felt back when Apple didn't collaborate downstream, so turnabout is truly fair play.
I find Apple to be very collaborative on OSS - I hacked up a feature I needed in swift-protobuf and over a couple of weeks two Apple engineers and one Google engineer spent a significant amount of time reviewing and helping me out. It was a good result and a great learning experience.
I too am more of a reluctant convert to Mac from Linux. It really does just work most of the time for me in the work context. It allows me to get my job done and not worry because it’s the most supported platform at the office. Shrug. But also the hardware is really really really nice.
I do have a personal MacBook Pro that I maxed out (https://gigatexal.blog/pages/new-laptop/new-laptop.html) but I do miss tinkering with my i3 setup and trying out new distros etc. I might get a used ThinkPad just for this.
But yeah my Mac personal or work laptop just works and as I get older that’s what I care about more.
Going to try out this container binary from them. Looks interesting.
If you’re looking for a hobby computer, Framework’s laptops are a lot of fun. There’s something about a machine that’s so intentionally designed to be opened up and tinkered with - it’s not my daily driver, but it’s my go to for silly projects now.
That's true, but I always thought of Swift as exceptional in this because Swift is a programming language, and this has become the norm for programming languages in my lifetime.
If my biases are already outdated, I'm happy to learn that. Either way, my hopes are the same. :)
They gave up on CUPS, which was left in limbo for way too long. Now it’s been forked, but I don’t know how successful that fork is.
They took over LLVM by hiring Chris Lattner. It was still a significant investment, and they kept pouring resources into it for a long while before it got really widespread adoption. And yes, that project is still going.
Apple is heavily involved in LLVM, but so are several other companies. Most prominently Google, which contributes a huge amount, and much of the testing infrastructure. But also Sony and SiFive and others as well.
It’s all very corporate, but also widely distributed and widely owned.
I'm in that camp— I was an Intel Mac user for a decade across three different laptops, and switched to WSL about six years ago. Haven't strongly considered returning.
In addition to the other comments about the fact that this wasn't forced to adopt the GPL, even if it were, there's nothing in the license that forces you to work with the community to take contributions from the public. You can have an entirely closed development process, take no feedback, accept no patches, and release no source code until specifically asked to do so.
Touching Linux would not be enough. It would have to be a derivative work, which this is (probably?) not.
Besides, I think OP wasn't talking about licenses; Apple has a lot of software under FOSS licenses. But usually, with their open-source projects, they reject most incoming contributions and don't really foster a community for them.
At first I thought this sounded like a blend of the virtualisation framework with a firecracker style lightweight kernel.
This project has its own kernel, but it also seems to be able to use the Firecracker one. I wonder what the advantages are. Even smaller? Making use of some Apple Silicon properties?
Has anyone tried it already and is it fast? Compared to podman on Linux or Docker Desktop for Mac?
The advantage is, now there's an Apple team working on it. They will be bothered by their own bugs and hopefully get them fixed.
Virtualization.framework and co were buggy af when introduced, and even after a few major macOS versions there are still lots of annoyances, for example the one documented in "Limitations on macOS 15" of this project, or half-assed memory ballooning support.
Hypervisor.framework, on the other hand, is mostly okay, but then you need to write a lot more code. Hypervisor.framework is equivalent to KVM and Virtualization.framework is equivalent to QEMU.
Really curious how this improves the filesystem bridging situation (which with Docker Desktop was basically bouncing from "bad" to "worse" and back over the years). Or whether it changes it at all.
I'm just taking a wild guess here, but I'd guess it's not a problem - WSL2 works afaik by having a native ext4 partition, and the Windows kernel accesses it. Intra-OS file perf is great, but using Windows to access Linux files is slow.
MacOS just understands ext4 directly, and should be able to read/write it with no performance penalty.
Has anyone tried turning on nested virt yet? Since the new container CLI spins each container in its own lightweight Linux VM via Virtualization.framework, I’m wondering whether the framework will pass the virtualization extensions through so we can modprobe kvm inside the guest.
Apple’s docs on developer.apple.com say nested virtualization is only available on M3-class Macs and newer (`VZGenericPlatformConfiguration.isNestedVirtualizationSupported`), but I don’t see an obvious flag in the container tooling to enable it. Would love to hear if anyone’s managed to get KVM (or even qemu-kvm) running inside one of these VMs.
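If anyone wants to probe support from their own code, the check against Virtualization.framework is short. A minimal sketch (macOS 15+ SDK assumed; the `com.apple.security.virtualization` entitlement and the rest of the VM configuration are omitted):

```swift
import Virtualization

if #available(macOS 15.0, *),
   VZGenericPlatformConfiguration.isNestedVirtualizationSupported {
    // Only true on M3-class hosts and newer.
    let platform = VZGenericPlatformConfiguration()
    platform.isNestedVirtualizationEnabled = true
    // Attach `platform` to a VZVirtualMachineConfiguration before validate()/start.
    print("nested virtualization available")
} else {
    print("nested virtualization not supported on this host")
}
```

Whether the `container` CLI ever exposes this is a separate question; the framework-level support is there either way.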
Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems is not particularly a victory if you look at it from that perspective. Just highlights the piss-poor state of Linux desktop even after all these years. For the average person, it's still terrible on every front and something I still have a hard time recommending when things so often go belly up.
Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".
I am a UX designer and forever Mac user. I also try Fedora on random stuff. I am not sure why, but last time I tried it I got vibes of Blender circa 10 years ago from desktop Linux GNOME.
Everybody made fun of Blender forever, but they consistently made things better step by step, and suddenly, after a few UX enhancements, the wind started to shift. It completely flipped and now everybody is using it.
I wouldn’t be surprised if desktop Linux’s days are still ahead. It’s not only Valve and gaming. Many things seem to be starting to work in tandem: Wayland, PipeWire, Flatpak, atomic distros… hey, even GNOME is starting to look pretty.
The problem with the Linux desktop isn't usability, it's the lack of corporate control software. Without corporate MDM and antivirus, you'll find it rather annoying to get a native Linux desktop in many companies.
For Windows and MacOS you can throw a few quick bucks over the wall and tick a whole bunch of ISO checkboxes. For Linux, you need more bespoke software customized to your specific needs, and that requires more work. Sure, the mindless checkboxes add nothing to whatever compliance you're actually trying to achieve, but in the end the auditor is coming over with a list of checkboxes that determine whether you pass or not.
I haven't had a Linux system collapse on me for years now thanks to Flatpak and all the other tools that remove the need for scarcely maintained external repositories in my package manager. I find Windows to be an incredible drag to install compared to any other operating system, though. Setup takes forever, updates take even longer, there's a pretty much mandatory cloud login now, and the desktop looks like a KDE distro tweaked to hell (in a bad way).
Gnome's "who needs a start button when there's one on the keyboard" approach may take some getting used to, but Valve's SteamOS shows that if you prevent users from mucking about with the system internals because gary0x136 on Arch Forums said you need to remove all editors but vi, you end up with a pretty stable system.
I'd say that's a fairly web development-centric take. I work at an embedded shop that happily puts a few million cars running Linux on the road every year, and we have hundreds of devs mainly running Linux to develop for Linux.
> Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".
Funnily enough that's how I feel every time I use Windows or Mac. Yet I'm not bold enough to call them "piss poor". I'm pretty sure I - mostly - feel like that because they are different from what I'm used to.
> Just highlights the piss-poor state of Linux desktop even after all these years.
What exactly is wrong with it? I prefer KDE to either Windows or MacOS. Obviously a Linux desktop is not going to be identical to whatever you use so there is a learning curve, but the same is true, and to a much greater extent, for moving from Windows to MacOS.
> layman to sanely develop programs for Linux systems
> or the average person
The "layman" or "average person" does not develop software.
The average person has plenty of problems dealing with Windows. They are just used to putting up with being unable to get things to work. Ran into that (a multi-function printer/scanner not working fully) with someone just yesterday.
If you find it hard to adjust to a Linux desktop you should not be developing software (at any rate not developing software that matters to anyone).
I have switched a lot of people to Linux (my late dad, my ex-wife, my daughter's primary school principal) who preferred it to Windows and my kids grew up using it. No problems.
Linux has not won on the desktop and probably never will, granted. But linux has won for running server-side / headless software, and has done so for years.
That said, counterpoint to my own, Android is Linux and has billions of installations, and SteamOS is Linux. I think the next logical step for SteamOS is desktop PCs, since (anecdotally) gaming PCs only really play games and use a browser or web-tech-based software like Discord. If that does happen, it'll be a huge boost to Linux on the consumer desktop.
> not once have I stopped and thought "huh, this is actually pretty usable and stable".
I think we need to have a specific audience in mind when saying whether or not it's stable. My Arch desktop (user: me) is actually really stable, despite the reputation. I have something that goes sideways maybe once a year or so, and it's a fairly easy fix for me when that does happen. But despite that, I would never give my non-techy parents an Arch desktop. Different users can have different ideas of stable.
I'm not going to jump on you, but for me Linux is much more friendly than Windows or macOS. I tried to use macOS, just because their Apple Silicon computers are so powerful, but in the end I abandoned it and switched back to a ThinkPad with Linux. Windows is outright unusable and macOS is barely usable for me, while Linux just works.
In my experience, Linux is great for the type of user who would be well-suited with a Chromebook. Stick a browser, office suite and Zoom on it, and enable automatic updates, and they'll be good to go.
FOSS OS dev is slow but is built on cross collaboration so the foundation is strong. Corporate OS has the means to tune to end user usage and can move very fast when business interests align with user experience.
When you are a DE that’s embedded in FOSS no one has an appetite to fund user experience the same way as corporate OS can.
We do have examples where this can work, like with the steam deck/steamOS but it’s almost counter to market incentives because of how slow dev can become.
I see the same problem with chat and protocol adoption. IRC as a protocol is too slow for companies who want to move fast and provide excellent UX, so they ditch cross collaboration in order to move fast.
The moment I read "Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems" I knew this comment would be a big pile of opinions backed by nothing factual.
Terrible on every front? I'm sorry, but it's hard to take this seriously. I've been daily driving Fedora with Cinnamon for the past 4 years and it works just fine. I use Mac and Windows on a regular basis and both are chock full of AI bloatware and random BS. On the same hardware, Linux absolutely runs circles around Windows 10 and Windows 11. If the application you need to use doesn't run on Linux; well, OK... not much you can do about that. But to promote that grievance to "terrible on every front" is ridiculous.
Meh, you're making the same mistake most do on this one. You're treating the Linux desktop like it's comparable, even though these two non-Linux operating systems are made by some of the biggest companies ever, with a lot of engineering hours paid to lock people in.
Plus, one could argue they've actually just established dominance through market lock-in, by ensuring the culture never had a chance and making operating-system moves hard for the normal person.
But more importantly if we instead consider the context that this is largely a collection of small utilities made by volunteers vs huge companies with paid engineering teams, one should be amazed at how comparable they are at all.
On the server room yes, but only in the sense UNIX has won, and Linux is the cheapest way to acquire UNIX, with the BSDs sadly looking from their little corner.
However on embedded, and desktop, the market belongs to others, like Zephyr, NuttX, Arduino, VxWorks, INTEGRITY,... and naturally Apple, Google and Microsoft offerings.
Also Linux is an implementation detail on serverless/lambda deployments, only relevant to infrastructure teams.
BSD has nothing to feel mournful about. Its derivatives are frequently found in the data center, but largely unremarked because it’s under the black box of storage and network appliances.
And it’s in incredible numbers - hundreds of millions of units - of game consoles.
The BSD family isn’t taking a bow in public, that’s all.
Well. It can also be argued that the other two platforms are finding ways to allow using Linux without leaving those platforms, which slows down market share of Linux on desktop as the primary OS.
It makes Linux the common denominator between all platforms, which could potentially mean that it gets adopted as a base platform API like POSIX is/was.
More software gets developed for that base Linux platform API, which makes releasing Linux-native software easier/practically free, which in turn makes desktop Linux an even more viable daily driver platform because you can run the same apps you use on macOS or Windows.
That isn’t exactly new; the hypervisor underneath has been in macOS for years, but poorly exploited. It’s gained a few features, but what’s really substantial today are the (much) enhanced ergonomics on top.
I know, but they've invested some effort into e.g. a custom Linux kernel config and vminitd+RPC for this, so the optimizations specific to running containerized Linux apps are new.
Fascinating to me how Windows and Linux have cross-pollinated each other through things like WSL and Proton. Platform convergence might become a thing within our lifetimes.
I made a "long bet" with a friend about a decade ago that by 2030 'Microsoft Windows' would just be a proprietary window manager running on Linux (similar - in broad strokes - to the MacOS model that has Darwin under the hood).
I don't think I'll make my 2030 date at this point but there might be some version of Windows like this at some point.
I also recognize that Windows' need to remain backwards compatible might prevent this, unless there's a Rosetta-style emulation layer to handle all the Win32 APIs etc..
Linux has already won, in the form of Android and to an extent ChromeOS. Many people just don't recognize it as such because most of the system isn't the X11/Wayland desktop stack the "normal" Linux distros use.
Hell, Samsung is delivering Linux to the masses in the form of Wayland + PulseAudio under the brand name "Tizen". Unlike desktop land, Tizen has been all-in on Wayland since 2013 and it's been doing fine.
"It" (aka the cloud providers) has won in the foobar POSIX department such that only a full Linux VM can run your idiosyncractic web apps despite or actually because of hundreds of package managers and dependency resolution and late binding mechanisms, yes.
I'd consider revisiting this. These days you can do studio level video production, graphics and pro audio on Linux using native commercial software from a bare install on modern distributions.
I do pro audio on Linux, my commercial DAWs, VSTs, etc are all Linux-native these days. I don't have to think about anything sound-wise because Pipewire handles it all automatically. IMO, Linux has arrived when it comes to this niche recently, five years ago I'd have to fuck around with JACK, install/compile a realtime kernel and wouldn't have as many DAWs & VSTs available.
Similarly, I have a friend in video production and VFX whose studio uses Linux everywhere. Blender, DaVinci Resolve, etc make that easy.
There is a lack of options when it comes to pro illustration and raster graphics. The Adobe suite reigns supreme there.
> Is it winning if you are the only one playing the game?
Depending on what you mean with "the game", I'd say even more so.
MS/Apple used to villify or ridicule Linux, now they need to distribute it to make their own product whole, because it turns out having an Open Source general purpose OS is so convenient and useful it's been utilized in lots of interesting ways - containers, for example - that the proprietary OS implementations simply weren't available for. I'd say it's a remarkable development.
I need to look into this a little more, but can anyone tell me if this could be used to bundle a Linux container into a MacOS app? I can think of a couple of places that might be useful, for example giving a GPT access to a Linux environment without it having access to run root CLI commands.
Yes, as long as you are okay with your app only working on macOS 26. Otherwise you can already achieve what you want using Virtualization.framework directly, though it'll be a little more work.
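For context, the direct Virtualization.framework route looks roughly like this. A minimal sketch, not Apple's implementation: the kernel/initrd paths are placeholders, and error handling plus the `com.apple.security.virtualization` entitlement are left out:

```swift
import Virtualization

// Boot a minimal Linux guest via Virtualization.framework (macOS 11+).
let config = VZVirtualMachineConfiguration()
config.cpuCount = 2
config.memorySize = 1 << 30  // 1 GiB

// Placeholder paths: bring your own kernel and initramfs.
let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinux"))
bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
bootLoader.commandLine = "console=hvc0"
config.bootLoader = bootLoader

try config.validate()
let vm = VZVirtualMachine(configuration: config)
vm.start { result in
    if case .failure(let error) = result {
        print("VM failed to start: \(error)")
    }
}
```

On top of this you'd still need virtio devices for storage/network and an init that execs your payload, which is roughly the part `container`'s vminitd handles for you.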
Thinking about this a bit more, one immediate issue I see with adoption is that the idea of launching each container in its own VM to fully isolate it and give it its own IP, while neat, doesn't really translate to Linux or Windows. This means if you have a team of developers and a single one of them doesn't have a Mac, your local dev model is already broken. So I can't see a way to easily replace Docker/Compose with this.
It translates exactly to Kubernetes though, except without the concept of pods. I don't see anything in this that would stop them adding pods on top later, which would allow Kubernetes- or Compose-like setups (multiple containers in the same pod).
I wonder if this will dramatically improve gaming on a Mac? Valve has been making games more reliable due to Steam Deck, and gaming on Linux is getting better every year.
Could games be run inside a virtual Linux environment, rather than Apple’s Metal or similar tool?
This would also help game developers - now they only need to build for Windows, Linux, and consoles.
As far as I understand, it's a modified/extended version of Wine, which, as the name suggests, is not an emulator (but rather a userspace reimplementation of the Windows API, including a layer that translates DirectX to OpenGL/Vulkan).
The reverse, i.e. running Linux binaries on Windows or macOS, is not easily possible without virtualization, since Linux uses direct syscalls instead of always going through a dynamically linked static library that can take care of compatibility in the way that Wine does. (At the very least, it requires kernel support, like WSL1; Wine is all userspace.)
According to reporting Rosetta will still be supported for old games that rely on Intel code
> But after that, Rosetta will be pared back and will only be available to a limited subset of apps—specifically, older games that rely on Intel-specific libraries but are no longer being actively maintained by their developers. Devs who want their apps to continue running on macOS after that will need to transition to either Apple Silicon-native apps or universal apps that run on either architecture.
Not necessarily. For example, the Xbox 360 runs every game in a hypervisor, so technically, everything is running in a VM.
It's all a question of using the right/performant hardware interfaces, e.g. IOMMU-based direct hardware access rather than going through software emulation for performance-critical devices.
From that document I read that it in fact does, but it doesn't release memory if the app starts consuming less. It does memory ballooning though, so the VM only consumes as much RAM as the maximum amount requested by the app.
It's quite a stretch to go from Apple launching container support for macOS to "they are going to compete with AWS". Especially considering Apple's own server workloads and data storage are mostly on GCP.
It's still virtualization, so it'll necessarily be (slightly) slower than just running Linux natively. I don't think Apple's hardware makes up for that, certainly not at the price point at which they sell it.
Huh. I suppose it’s a good thing I never came around to migrating our team from docker desktop to Orbstack, even though it seems like they pioneered a lot of the Apple implementation perks…
They could replace their underlying implementations with this, and for most users, they wouldn't notice the difference, other than any performance gains.
On an ARM Linux target, they do support translation of Intel binaries under virtualization using Rosetta 2. I do not know if their containerization supports it.
that's nice and all - but where are the native Darwin containers? Why is it ok for Apple to continue squeezing people with macOS CI jobs to keep buying stupid Mac Minis to put in racks only to avoid a mess? Just pull FreeBSD jails!
I would really want to have a macOS (not just Darwin) container, but it seems that it is not possible with macOS. I don't remember the specifics, but there was a discussion here at HN a couple of months ago and someone with intimate Darwin knowledge explained why.
Heck even Microsoft managed to run Windows containers on Windows, even with the technical debt and bloat they had. Apple could, they just don't want to because it goes straight against their financial interests
What setup are you comparing this to? In the past silicon Macs plus, say, Rancher Desktop have been happy to pretend to build an x86 image for me, but those images have generally not actually worked for me on actual x86 hardware.
Comparing to Docker for Mac. Running on MBA M2. Building a 5GB image (packaging enterprise software).
Docker for Mac builds it in 4 minutes.
container tool... 17 minutes, maybe even more. And I did set the CPU and memory for the builder to higher numbers than the defaults (similar to what Docker for Mac is set to). And in reality it is not the build stage, but "=> exporting to oci image format" that takes forever.
Running containers - have not seen any issues yet.
Forget Linux containers on Mac, as far as I’m concerned that’s already a solved problem. What about Mac containers? We still don’t have a way to run a macOS app with its own process namespace/filesystem in 2025. And with all this AI stuff, there’s a need to minimise the blast radius of a rogue app more than ever.
Is there any demand for mac binaries in production? I can't think of a single major cloud provider that offers Mac hardware based k8s nor why you'd want to pay the premium over commodity hardware. Linux seems to be the lingua franca of containerized software distribution. Even windows support for containers is sketchy at best
> I can't think of a single major cloud provider that offers Mac hardware based k8s nor why you'd want to pay the premium over commodity hardware
If you're a dev team that creates Mac/iOS/iPad/etc apps, you might want Mac hardware in your CI/CD stack. Cloud providers do offer virtual Macs for this purpose.
If you're a really big company (eg. a top-10 app, eg. Google) you might have many teams that push lots of apps or app updates. You might have a CI/CD workflow that needs to scale to a cluster of Macs.
Also, I'm pretty sure apple at least partially uses Apple hardware in the serving flow (eg. "Private Cloud Compute") and would have an interest in making this work.
Oh, and it'd be nice to be able to better sand-box untrusted software running on my personal dev machine.
I don't think the parent was asking for server side macOS containerization, but desktop. It'd be nice to put something like Cursor in a sandbox where it really couldn't rm -rf your home directory. I'd love to do the same thing with every app that comes with an installer.
Looks cool! In the short demo [0] they mention "within a few hundred milliseconds" as VM boot time (I assume?). I wonder how much tweaking they had to do, because this is using the Virtualization.framework, which has been around a while and is used by Docker Desktop / Colima / UTM (as an option).
I wonder what the memory overhead is, especially if running multiple containers - as that would spin up multiple VM's.
> Containers achieve sub-second start times using an optimized Linux kernel configuration[0] and a minimal root filesystem with a lightweight init system.
Many developers I know don't use macOS mainly because they depend on containers and virtualisation is slow, but if Apple can pull off efficient virtualisation and good system integration (port mapping, volumes), then it will eat away at a large share of Linux systems.
update: torch for Linux on ARM isn't built with Apple's MPS support so it didn't work with the pip install version. Perhaps it's possible to compile from scratch to have it.
Will this likely have any implications for tools like ‘act’ for running local GitHub actions? I’ve had some trouble running act on apple silicon in the past.
In theory could make it more seamless, so installation instructions didn't include 'you must have a functioning docker engine' etc. - but in practice I assume it's a platform-agnostic non-Swift tool that isn't interested in a macOS-specific framework to make it smoother on just one platform.
> Let's run linux inside a container inside docker inside macos inside an ec2 macos instance inside a aws internal linux host inside a windows pc inside the dreaming mind of a child.
Not even the first non-hyperbolic part of what you wrote is correct. "Container" most often refers to OS-level virtualization on Linux hosts using a combination of namespaces, cgroups, SDN, and some mount magic (among other things). macOS is BSD-based and therefore doesn't support the first two things in that list. Apple can either write a compatibility shim that emulates this functionality or virtualize the Linux kernel to support it. They chose the latter. There is no Docker involved.
This is a completely sane and smart thing for them to do. Given the choice I'd still much rather run Linux but this brings macOS a step closer to parity with such.
Whenever I have to develop on Windows, I clone my repos and run neovim / docker inside of WSL, for the improved performance (versus copying / mounting files from windows host) and linux. The dev experience is actually pretty good once you get there.
I'm not sure this is the same, though. This feels more like docker for desktop running on a lightweight vm like Colima. Am I wrong?
Wouldn't be surprised if this goes through the same process Windows users did with WSL. Starting out with no systemd, to community-developed systemd-in-a-bottle setups, to proper systemd integration
OCI containers are supposed to be "one container, one process": at the very least the container's main server process runs as PID 1 (other processes may be spawned at times, but typically the container's main process is PID 1).
Containerization is literally the antithesis of systemd.
I think this is purely a checkbox feature to compare against WSL. Otherwise Apple just wouldn't get involved (not the engineers, who would do lots of good things, but the management that let this out).
You only need to expose a docker daemon, which docker compose will use. The daemon is just a unix socket to a process that manages the containers, which is very likely a trivial change on top of the existing container codebase.
For instance, Orbstack implements the docker daemon socket protocol, so despite not being docker, it still allows using docker compose where containers are created inside of Orbstack.
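That socket-level compatibility is easy to picture: the daemon really is just HTTP spoken over a unix socket. Here's a minimal, self-contained sketch in Python; the "daemon" below is a fake that answers a single ping, purely to show the mechanics, and the socket path and response are made up for illustration (this is not the real Docker API):

```python
import http.client
import os
import socket
import tempfile
import threading

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP client that connects over a unix socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def fake_daemon(server_sock):
    """Stand-in for a container manager: answers one ping request."""
    conn, _ = server_sock.accept()
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")
    conn.close()

# A throwaway socket path; the real Docker one is /var/run/docker.sock.
path = os.path.join(tempfile.mkdtemp(), "fake-docker.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
threading.Thread(target=fake_daemon, args=(server,), daemon=True).start()

client = UnixHTTPConnection(path)
client.request("GET", "/_ping")
resp = client.getresponse()
print(resp.status, resp.read().decode())  # 200 OK
```

Any process that answers the same HTTP endpoints behind that socket looks like Docker to docker compose and friends, which is exactly the trick Orbstack (and potentially Apple's tooling) can play.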
> You need an Apple silicon Mac to build and run Containerization.
> To build the Containerization package, your system needs either:
> macOS 15 or newer and Xcode 26 Beta
> macOS 26 Beta 1 or newer
For those on Intel Macs, this is your last chance to switch to Apple Silicon (Sequoia was the second last)[0], as macOS Tahoe is the last version to support Intel Macs.
Also, there are some really amazing deals on used/refurb M2 Macs out there. ~$700 for a MacBook Air is a pretty great value, if you can live with 16GB of RAM and an okay but not amazing screen.
$450 for a M4 Mac mini (at Microcenter, but Best Buy will price match) is possibly the best computer hardware deal out there. It is an incredible machine.
Indeed. I just grabbed a mint M3 MBA on ebay for about $950 with a 1TB ssd (which tbh was my main need to upgrade this family member in the first place, as they weren't CPU-bound on the old M1). Wild deals to be had!
Gathering this information and putting together a distro to rescue old Macbooks from the e-waste bin would be a worthwhile project. As far as I can tell they're great hardware.
I imagine things get harder once you get into the USB-C era.
Video about it here: https://developer.apple.com/videos/play/wwdc2025/346/
Looks like each container gets its own lightweight Linux VM.
Can take it for a spin by downloading the container tool from here: https://github.com/apple/container/releases (needs macOS 26)
The submission is about https://news.ycombinator.com/item?id=44229239
The former is the framework enabling Linux containers on lightweight VMs and the latter is a tool using that framework.
> Looks like each container gets its own lightweight Linux VM.
That sounds pretty heavyweight. A project with 12 containers will run 12 kernels instead of 1?
Curious to see metrics on this approach.
This is the approach used by Kata Containers/Firecracker. It's not much heavier than the shared kernel approach, but has significantly better security. A bug in the container runtime doesn't immediately break the separation between containers.
The performance overhead of the VM is minimal; the main tradeoff is container startup time.
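Back-of-envelope on what "12 kernels instead of 1" costs in memory; the per-guest figures below are assumptions for illustration, not measurements of Apple's stack:

```python
# Illustrative, assumed figures: each micro-VM carries its own stripped
# kernel plus a minimal init/rootfs, vs. paying that cost once for a
# single shared VM.
PER_GUEST_OVERHEAD_MB = 50   # assumption: kernel + init per micro-VM
SHARED_VM_OVERHEAD_MB = 50   # assumption: one-time cost of a shared VM
N_CONTAINERS = 12

micro_vms_total = N_CONTAINERS * PER_GUEST_OVERHEAD_MB
shared_vm_total = SHARED_VM_OVERHEAD_MB
extra = micro_vms_total - shared_vm_total

print(f"micro-VMs: ~{micro_vms_total} MB, shared VM: ~{shared_vm_total} MB, "
      f"extra: ~{extra} MB")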
7 replies →
Is that not the premise of docker?
16 replies →
I could imagine one Linux kernel running in a VM (on top of macOS) and then containers inside that guest OS. So 1 base instance (macOS), 1 shared Linux VM, 12 containers (using that one kernel).
1 reply →
Also works on macOS 15, but they mentioned that some networking features will be limited.
Shoutout to Michael Crosby, the person in this video, who was instrumental in getting Open Containers (https://opencontainers.org) to v1.0. He was a steady and calm force through a very rocky process.
"A new report from Protocol today details that Apple has gone on a cloud computing hiring spree over the last few months... Michael Crosby, one of a handful of ex-Docker engineers to join Apple this year. Michael is who we can thank for containers as they exist today. He was the powerhouse engineer behind all of it, said a former colleague who asked to remain anonymous."
https://9to5mac.com/2020/05/11/apple-cloud-computing/
1 reply →
I would assume that "lightweight" in this case means that they share a single Linux kernel. Or that there is an emulation layer that maps the Linux Kernel API to macOS. In any case, I don't think that they are running a Linux kernel per container.
You don’t have to assume, the docs in the repo tell you that it does run a Linux kernel in each VM. It’s one container per VM.
1 reply →
"Lightweight" in the sense that the VM contains one static executable that runs the container, and not a full fledged Ubuntu VM (e.g. Colima).
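The shape of that single static executable (Apple's is called vminitd in the repo docs; the sketch below is a generic illustration of the idea, not Apple's code) is roughly "be PID 1, start the container's process, reap children until it exits":

```python
import os

def tiny_init(argv):
    """Sketch of a minimal container init: spawn the container's main
    process, then sit in a wait loop reaping children until it exits."""
    main_pid = os.fork()
    if main_pid == 0:
        os.execvp(argv[0], argv)  # become the container's main process
    while True:
        pid, status = os.wait()   # reap any child that exits
        if pid == main_pid:
            return os.waitstatus_to_exitcode(status)

# Run a trivially short-lived "container process" as a demo.
exit_code = tiny_init(["true"])
print(exit_code)  # 0
```

A real init additionally has to handle signals, mount /proc and friends, and speak an RPC protocol to the host, but nothing close to a full distro userland — hence "lightweight".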
It seems to work on macOS 15 as well, with some limitations[0].
[0] https://github.com/apple/container/blob/main/docs/technical-...
interesting choice - doesn't that then mean that container to container integration is going to be harder and a lot of overhead per-container? I would have thought a shared VM made more sense. I wonder what attracted them to this.
It seems great from a security perspective, and a little bit nice from a networking perspective.
2 replies →
I like the security aspect. Maybe DNS works, and you can use that for communication between containers?
> Looks like each container gets its own lightweight Linux VM.
We're through the looking glass here, people
"Containers" now apparently means "boot a docker image as an ephemeral VM."
Which isn't such a bad idea really.
"Looks like each container gets its own lightweight Linux VM."
Not a container "as such" then.
How hard is it to emulate linux system calls?
> How hard is it to emulate linux system calls?
It’s doable but a lot more effort. Microsoft did it with WSL1 and abandoned it with WSL2.
20 replies →
> How hard is it to emulate linux system calls?
FreeBSD has linuxulator and illumos comes with lx-zones that allow running some native linux binaries inside a "container". No idea why Apple didn't go for similar option.
3 replies →
syscalls are just a fraction of the surface area. There are many files in many different vfs you need to implement, things like SELinux and eBPF, io_uring, etc. It's also a constantly shifting target. The VM API is much simpler, relatively stable, and already implemented.
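A toy dispatch table makes the "fraction of the surface area" point concrete. The three syscall numbers below are from the real arm64 Linux ABI, but the handler set is a hypothetical shim, not taken from any actual implementation:

```python
import errno
import os

# arm64 Linux syscall numbers (these three are from the real ABI).
SYS_WRITE, SYS_GETPID, SYS_IO_URING_SETUP = 64, 172, 425

# An emulation shim maps each guest syscall to a host-side
# implementation. Every entry is more surface area to get right,
# and the table has hundreds of rows in a real kernel.
HANDLERS = {
    SYS_WRITE: lambda fd, buf: os.write(fd, buf),
    SYS_GETPID: lambda: os.getpid(),
    # io_uring, eBPF, /proc, /sys, ... would all need entries too.
}

def emulate(nr, *args):
    handler = HANDLERS.get(nr)
    if handler is None:
        return -errno.ENOSYS  # "not implemented", WSL1-style
    return handler(*args)

print(emulate(SYS_GETPID) > 0)               # True
print(emulate(SYS_IO_URING_SETUP, 8, None))  # -38 (ENOSYS)
```

Every row in a real version of that table tracks a moving target, which is the argument for shipping an actual kernel in a VM instead of emulating one.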
Emulating Linux only makes sense on devices with constrained resources.
> How hard is it to emulate linux system calls?
Just replace the XNU kernel with Linux already.
The CLI from the press release/WWDC session is at https://github.com/apple/container which I think is what many like myself would be interested in. I was hoping this'd be shipped with the newest Xcode Beta but that doesn't seem to be the case. Prebuilt packages are missing at the moment but they are working on it: https://github.com/apple/container/issues/54
Seems prebuilt packages were released exactly one minute after your comment: https://github.com/apple/container/releases/tag/0.1.0
Beat me to it, thanks!
Discussed at https://news.ycombinator.com/item?id=44229239
Wonder how Docker feels about this. I'd assume a decent amount of Docker for Desktop is on Mac...
Well it makes developing Docker Desktop infinitely easier for them, since they no longer need to start their own Linux VM under the hood. I think the software is "sticky" enough that people will still prefer to use Docker Desktop for the familiar CLI and UX, Docker Compose, and all the Docker-specific quirks that make migrating to a different container runtime basically impossible.
Docker Desktop on Windows uses WSL to provide the Docker daemon, doesn't it? So Docker Desktop has a history of leaning into OS offerings for virtualizing Linux like this where they can.
1 reply →
I never used docker desktop and am struggling to understand what you are supposed to be doing with a gui in a docker/container context.
22 replies →
Unless this provides an extremely compatible Docker socket implementation, this is the answer. When Docker changed the licensing for Docker Desktop, my previous employer made it harder to get permission. However, there were a few tools that were in common usage, and once you mentioned that you used them, you got your permission.
Some progress has been made to create a non-Docker implementation that integrates with all those random tools that expect to be able to yeet bytes into or out of the Docker socket, but I still hit blockers the last time I tried.
This doesn't compete with Docker for Desktop, as it's more low-level than that.
Docker for Desktop sits on-top of container/virtualization software (Hypervisor.framework and QEMU on Mac, WSL on Windows, containerd on Linux). So there's a good chance that future versions of Docker for Desktop will use this library, but they don't really compete with each other.
Probably about the same way they feel about podman.
I guess it'll depend on whether or not this starts shipping by default with new macOS installs.
If it doesn't, then it's still a toss-up whether or not a user chooses docker/podman/this...etc.
If it ends up shipping by default and is largely compatible with the same command line flags and socket API... Then docker has a problem.
For what it's worth, I prefer podman but even on Linux where the differentiators should be close to zero, I still find certain things that only docker does.
Podman is fairly niche. This is an Apple product that Apple developer circles will push hard.
11 replies →
Docker Desktop is closed source proprietary software and this is free software, so this is a win (for us, at least).
Also the second they started charging podman dev picked up and that has gotten real good.
This is the most surprising and interesting part, imo:
> Contributions to `container` are welcomed and encouraged. Please see our main contributing guide for more information.
This is quite unusual for Apple, isn't it? WebKit was basically a hostile fork of KHTML, Darwin has basically been something they throw parts of over the wall every now and then, etc.
I hope this and other projects Apple has recently put up on GitHub see fruitful collaboration from user-developers.
I'm a F/OSS guy at heart who has reluctantly become a daily Mac user due to corporate constraints that preclude Linux. Over the past couple of years, Apple Silicon has convinced me to use an Apple computer as my main laptop at home (nowadays more comparable, Linux-friendly alternatives seem closer now than when I got my personal MacBook, and I'm still excited for them). This kind of thing seems like a positive change that lets me feel less conflicted.
Anyway, success here could perhaps be part of a virtuous cycle of increasing community collaboration in the way Apple engages with open-source. I imagine a lot of developers, like me, would both personally benefit from this and respect Apple for it.
> WebKit was basically a hostile fork of KHTML
Chromium is a hostile fork of WebKit. WebKit was a rather polite fork of KHTML; it's just that they had a team of full-time programmers, so KHTML couldn't keep up with the upstream requests and gave up, since WebKit did a better job anyway.
I personally would LOVE if a corporation did this to any of my open source projects.
And the creator of KHTML is now part of WebKit team at Apple.
Even KDE eventually dropped KHTML in favor of KHTML's own successors, WebKit-based engines (like QtWebKit, and later Qt WebEngine, based on Chromium).
A web engine isn't just software — it needs to keep evolving.
Recognising the value of someone's work is better than ignoring it and trying to build everything from scratch on your own; Microsoft's Internet Explorer did not last.
Blink is the hostile fork of WebKit. And you would not like if any corporations did this to your Open Source project; on HN alone I see a small army's worth of people who bitch about websites built for Chrome but not Safari. That's how Konquerer users felt back when Apple didn't collaborate downstream, so turnabout is truly fair play.
8 replies →
> WebKit was basically a hostile fork of KHTML...
WebKit has been a fully proper open source project - with open bug tracker, patch review, commit history, etc - since 2005.
Swift has been a similarly open project since 2015.
Timeline-wise, a new high profile open source effort in 2025 checks out.
FoundationDB is a fully proper open source project since 2018…
I find Apple to be very collaborative on OSS - I hacked up a feature I needed in swift-protobuf and over a couple of weeks two Apple engineers and one Google engineer spent a significant amount of time reviewing and helping me out. It was a good result and a great learning experience.
I too am more of a reluctant convert to Mac from Linux. It really does just work most of the time for me in the work context. It allows me to get my job done and not worry because it’s the most supported platform at the office. Shrug. But also the hardware is really really really nice.
I do have a personal MacBook pro that I maxed out (https://gigatexal.blog/pages/new-laptop/new-laptop.html) but I do miss tinkering with my i3 setup and trying out new distos etc. I might get a used thinkpad just for this.
But yeah my Mac personal or work laptop just works and as I get older that’s what I care about more.
Going to try out this container binary from them. Looks interesting.
If you’re looking for a hobby computer, Framework’s laptops are a lot of fun. There’s something about a machine that’s so intentionally designed to be opened up and tinkered with - it’s not my daily driver, but it’s my go to for silly projects now.
It's not that surprising. Much of Swift and its frameworks are contributed by the open source community.
That's true, but I always thought of Swift as exceptional in this because Swift is a programming language, and this has become the norm for programming languages in my lifetime.
If my biases are already outdated, I'm happy to learn that. Either way, my hopes are the same. :)
2 replies →
Apple has a lot of good stuff out there doesn't it? Aren't llvm and cups theirs more or less?
They gave up on CUPS, which was left in limbo for way too long. Now it’s been forked, but I don’t know how successful that fork is.
They took over LLVM by hiring Chris Lattner. It was still a significant investment and they keep pouring resources into it for a long while before it got really widespread adoption. And yes, that project is still going.
4 replies →
Apple is heavily involved in llvm, but so are several other companies. Most prominently Google, which contributes a huge amount, and much of the testing infrastructure. But also Sony and SiFive and others as well.
It’s all very corporate, but also widely distributed and widely owned.
> I'm a F/OSS guy at heart who has reluctantly become a daily Mac user due to corporate constraints that preclude Linux
I suspect this move was designed to stop losing people like you to WSL.
As a long-time Linux user, I can confidently say that the experience of using a M1 Pro is significantly superior to WSL on Windows!
I can happily use my Mac as my primary machine without much hassle, just like I would often do with WSL.
I'm in that camp— I was an Intel Mac user for a decade across three different laptops, and switched to WSL about six years ago. Haven't strongly considered returning.
> I suspect this move was designed to stop losing people like you to WSL.
I am also thinking the same, Docker desktop experience was not that great at least on Intel Macs
Since this is touching Linux, and Linux is copyleft, they _have_ to do this.
In addition to the other comments about the fact that this wasn't forced to adopt the GPL, even if it were, there's nothing in the license that forces you to work with the community to take contributions from the public. You can have an entirely closed development process, take no feedback, accept no patches, and release no source code until specifically asked to do so.
They don't have to do literally any of this.
1 reply →
Touching Linux would not be enough. It would have to be a derivative work, which this is (probably?) not.
Besides, I think OP wasn't talking about licenses; Apple has a lot of software under FOSS licenses. But usually, with their open-source projects, they reject most incoming contributions and don't really foster a community for them.
2 replies →
If the license of this project were determined by obligations to the Linux kernel, it would be GPLv2, not Apache License 2.0!
The comment was about them welcoming contributions, not making it open source.
At first I thought this sounded like a blend of the virtualisation framework with a firecracker style lightweight kernel.
This project has its own kernel, but it also seems to be able to use the Firecracker one. I wonder what the advantages are. Even smaller? Making use of some Apple silicon properties?
Has anyone tried it already and is it fast? Compared to podman on Linux or Docker Desktop for Mac?
The advantage is, now there's an Apple team working on it. They will be bothered by their own bugs and hopefully get them fixed.
Virtualization.framework and co was buggy af when introduced and even after a few major macOS versions there are still lots of annoyances, for example the one documented in "Limitations on macOS 15" of this project, or half-assed memory ballooning support.
Hypervisor.framework, on the other hand, is mostly okay, but then you need to write a lot more code. Hypervisor.framework is equivalent to KVM and Virtualization.framework is equivalent to qemu.
> They will be bothered by their own bugs
Laughs in Xcode
And QEMU on macOS uses Virtualization.framework for hardware virtualisation.
Really curious how this improves the filesystem bridging situation (which with Docker Desktop was basically bouncing from "bad" to "worse" and back over the years). Or whether it changes it at all.
I'm just taking a wild guess here, but I'd guess it's not a problem - WSL2 works afaik by having a native ext4 partition, and the Windows kernel accesses it. Intra-OS file perf is great, but using Windows to access Linux files is slow.
MacOS just understands ext4 directly, and should be able to read/write it with no performance penalty.
I would imagine it is low lift - using https://developer.apple.com/documentation/virtualization/sha... which is already built into the OS
If they wanted to improve the situation they would’ve needed to ship an apfs driver and a Linux kernel. Sadly they didn’t.
APFS isn't the solution, since you can't have two kernels accessing the same FS. The solution is probably something like virtio-fs with DAX.
I wonder how it compares to orbstack
Has anyone tried turning on nested virt yet? Since the new container CLI spins each container in its own lightweight Linux VM via Virtualization.framework, I’m wondering whether the framework will pass the virtualization extensions through so we can modprobe kvm inside the guest.
Apple’s docs (developer.apple.com) say nested virtualization is only available on M3-class Macs and newer (VZGenericPlatformConfiguration.isNestedVirtualizationSupported), but I don’t see an obvious flag in the container tooling to enable it. Would love to hear if anyone’s managed to get KVM (or even qemu-kvm) running inside one of these VMs.
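For anyone poking at this from inside a guest, a quick way to tell whether the virtualization extensions made it through is to check for the KVM device node. This is a generic Linux-side check, nothing specific to Apple's tooling:

```python
import os

def kvm_available(dev="/dev/kvm"):
    """True if the kernel exposes KVM and we can open it, i.e. nested
    virtualization was passed through and `modprobe kvm` succeeded."""
    return os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK)

print(kvm_available())
```

If this returns False inside the container's VM even after loading the kvm module, the framework almost certainly isn't exposing nested virtualization to that guest.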
So both of the other big two desktop OSs now have official mechanisms to run Linux VMs to host Linux-native applications.
You can make some kind of argument from this that Linux has won; certainly the Linux syscall API is now perhaps the most ubiquitous application API.
> Linux has won
Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems is not particularly a victory if you look at it from that perspective. Just highlights the piss-poor state of Linux desktop even after all these years. For the average person, it's still terrible on every front and something I still have a hard time recommending when things so often go belly up.
Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".
I am a UX designer and forever Mac user. I also try Fedora on random stuff. I am not sure why, but the last time I tried it I got Blender-circa-10-years-ago vibes from desktop Linux GNOME.
Everybody has been making fun of Blender forever, but they consistently made things better step by step, and suddenly, after a few UX enhancements, the wind started to shift. It completely flipped and now everybody is using it.
I wouldn't be surprised if desktop Linux's days are still ahead. It's not only Valve and gaming. Many things seem to be starting to work in tandem: Wayland, PipeWire, Flatpak, atomic distros… hey, even GNOME is starting to look pretty.
8 replies →
The problem with the Linux desktop isn't usability, it's the lack of corporate control software. Without corporate MDM and antivirus, you'll find it rather annoying to get a native Linux desktop in many companies.
For Windows and MacOS you can throw a few quick bucks over the wall and tick a whole bunch of ISO checkboxes. For Linux, you need more bespoke software customized to your specific needs, and that requires more work. Sure, the mindless checkboxes add nothing to whatever compliance you're actually trying to achieve, but in the end the auditor is coming over with a list of checkboxes that determine whether you pass or not.
I haven't had a Linux system collapse on me for years now thanks to Flatpak and all the other tools that remove the need for scarcely maintained external repositories in my package manager. I find Windows to be an incredible drag to install compared to any other operating system, though. Setup takes forever, updates take even longer, there's a pretty much mandatory cloud login now, and the desktop looks like a KDE distro tweaked to hell (in a bad way).
Gnome's "who needs a start button when there's one on the keyboard" approach may take some getting used to, but Valve's SteamOS shows that if you prevent users from mucking about with the system internals because gary0x136 on Arch Forums said you need to remove all editors but vi, you end up with a pretty stable system.
7 replies →
I'd say that's a fairly web development-centric take. I work at an embedded shop that happily puts a few million cars running Linux on the road every year, and we have hundreds of devs mainly running Linux to develop for Linux.
4 replies →
> Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".
Funnily enough that's how I feel every time I use Windows or Mac. Yet I'm not bold enough to call them "piss poor". I'm pretty sure I - mostly - feel like that because they are different from what I'm used to.
3 replies →
> Just highlights the piss-poor state of Linux desktop even after all these years.
What exactly is wrong with it? I prefer KDE to either Windows or MacOS. Obviously a Linux desktop is not going to be identical to whatever you use so there is a learning curve, but the same is true, and to a much greater extent, for moving from Windows to MacOS.
> layman to sanely develop programs for Linux systems
> or the average person
The "layman" or "average person" does not develop software.
The average person has plenty of problems dealing with Windows. They are just used to putting up with being unable to get things to work. Ran into that (a multi-function printer/scanner not working fully) with someone just yesterday.
If you find it hard to adjust to a Linux desktop you should not be developing software (at any rate not developing software that matters to anyone).
I have switched a lot of people to Linux (my late dad, my ex-wife, my daughter's primary school principal) who preferred it to Windows and my kids grew up using it. No problems.
7 replies →
Linux has not won on the desktop and probably never will, granted. But linux has won for running server-side / headless software, and has done so for years.
That said, counterpoint to my own, Android is Linux and has billions of installations, and SteamOS is Linux. I think the next logical step for SteamOS is desktop PCs, since (anecdotally) gaming PCs only really play games and use a browser or web-tech-based software like Discord. If that does happen, it'll be a huge boost to Linux on the consumer desktop.
> not once have I stopped and thought "huh, this is actually pretty usable and stable".
I think we need to have a specific audience in mind when saying whether or not it's stable. My Arch desktop (user: me) is actually really stable, despite the reputation. I have something that goes sideways maybe once a year or so, and it's a fairly easy fix for me when that does happen. But despite that, I would never give my non-techy parents an Arch desktop. Different users can have different ideas of stable.
3 replies →
I'm not going to jump on you, but for me Linux is much more friendly than Windows or macOS. I tried to use macOS, just because their Apple silicon computers are so powerful, but in the end I abandoned it and switched back to a Thinkpad with Linux. Windows is outright unusable and macOS is barely usable for me, while Linux just works.
In my experience, Linux is great for the type of user who would be well-suited with a Chromebook. Stick a browser, office suite and Zoom on it, and enable automatic updates, and they'll be good to go.
4 replies →
FOSS OS dev is slow but is built on cross collaboration so the foundation is strong. Corporate OS has the means to tune to end user usage and can move very fast when business interests align with user experience.
When you are a DE that’s embedded in FOSS no one has an appetite to fund user experience the same way as corporate OS can.
We do have examples where this can work, like with the steam deck/steamOS but it’s almost counter to market incentives because of how slow dev can become.
I see the same problem with chat and protocol adoption. IRC as a protocol is too slow for companies who want to move fast and provide excellent UX, so they ditch cross collaboration in order to move fast.
The moment I read "Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems" I knew this comment would be a big pile of opinions not backed by facts.
Fedora/Debian + AMD ThinkPad here. Haven't had any crashes or instability in 5+ years.
Terrible on every front? I'm sorry, but it's hard to take this seriously. I've been daily driving Fedora with Cinnamon for the past 4 years and it works just fine. I use Mac and Windows on a regular basis and both are chock full of AI bloatware and random BS. On the same hardware, Linux absolutely runs circles around Windows 10 and Windows 11. If the application you need to use doesn't run on Linux; well, OK... not much you can do about that. But to promote that grievance to "terrible on every front" is ridiculous.
Meh, you're making the same mistake most do on this one. You're treating the Linux desktop like it's comparable, even though these two non-Linux operating systems are made by some of the biggest companies ever, with a lot of engineering hours paid to lock people in.
Plus, one could argue they've actually just established dominance through market lock-in, by ensuring the culture never had a chance and making operating system moves hard for the normal person.
But more importantly if we instead consider the context that this is largely a collection of small utilities made by volunteers vs huge companies with paid engineering teams, one should be amazed at how comparable they are at all.
I disagree. The only feature I miss on Linux is the ctrl-scroll to zoom feature of macOS.
If Gnome implemented that as well as macOS does I’d happily switch permanently.
2 replies →
On the server room yes, but only in the sense UNIX has won, and Linux is the cheapest way to acquire UNIX, with the BSDs sadly looking from their little corner.
However on embedded, and desktop, the market belongs to others, like Zephyr, NuttX, Arduino, VxWorks, INTEGRITY,... and naturally Apple, Google and Microsoft offerings.
Also Linux is an implementation detail on serverless/lambda deployments, only relevant to infrastructure teams.
BSD has nothing to feel mournful about. Its derivatives are frequently found in the data center, but largely unremarked because it’s under the black box of storage and network appliances.
And it’s in incredible numbers - hundreds of millions of units - of game consoles.
The BSD family isn’t taking a bow in public, that’s all.
6 replies →
Well. It can also be argued that the other two platforms are finding ways to allow using Linux without leaving those platforms, which slows down market share of Linux on desktop as the primary OS.
> which slows down market share of Linux on desktop as the primary OS
I think what slows down market share of Linux on desktop is Linux on desktop itself.
I use Linux, and I understand that it's a very hard job to take it to the level of Windows or macOS, but it is what it is.
It makes Linux the common denominator between all platforms, which could potentially mean that it gets adopted as a base platform API like POSIX is/was.
More software gets developed for that base Linux platform API, which makes releasing Linux-native software easier/practically free, which in turn makes desktop Linux an even more viable daily driver platform because you can run the same apps you use on macOS or Windows.
8 replies →
That isn’t exactly new, the hypervisor underneath has been in macOS for years, but poorly exploited. It’s gained a few features but what’s really substantial today are the (much) enhanced ergonomics on top.
I know, but they've invested some effort into e.g. a custom Linux kernel config and vminitd+RPC for this, so the optimizations specific to running containerized Linux apps are new.
Fascinating to me how Windows and Linux have cross-pollinated each other through things like WSL and Proton. Platform convergence might become a thing within our lifetimes.
I made a "long bet" with a friend about a decade ago that by 2030 'Microsoft Windows' would just be a proprietary window manager running on Linux (similar - in broad strokes - to the MacOS model that has Darwin under the hood).
I don't think I'll make my 2030 date at this point but there might be some version of Windows like this at some point.
I also recognize that Windows' need to remain backwards compatible might prevent this, unless there's a Rosetta-style emulation layer to handle all the Win32 APIs etc..
2 replies →
Linux has already won, in the form of Android and to an extent ChromeOS. Many people just don't recognize it as such because most of the system isn't the X11/Wayland desktop stack the "normal" Linux distros use.
Hell, Samsung is delivering Linux to the masses in the form of Wayland + PulseAudio under the brand name "Tizen". Unlike desktop land, Tizen has been all-in on Wayland since 2013 and it's been doing fine.
Google could replace Linux kernel with something else and no one would notice, other than OEMs and people rooting their devices.
Likewise with ChromeOS.
They are Pyrrhic victories.
As for Tizen, interesting that Samsung hasn't yet completely lost interest in it.
4 replies →
HarmonyOS has its own non-Linux kernel, so Linux now has a major competitor that will be present in a huge number of devices.
https://en.m.wikipedia.org/wiki/HarmonyOS_NEXT
"It" (aka the cloud providers) has won in the foobar POSIX department such that only a full Linux VM can run your idiosyncratic web apps, despite (or actually because of) hundreds of package managers and dependency resolution and late-binding mechanisms, yes.
Except for graphics, audio, and GUIs for which no good solutions exist
I'd consider revisiting this. These days you can do studio-level video production, graphics, and pro audio on Linux using native commercial software from a bare install on modern distributions.
I do pro audio on Linux, my commercial DAWs, VSTs, etc are all Linux-native these days. I don't have to think about anything sound-wise because Pipewire handles it all automatically. IMO, Linux has arrived when it comes to this niche recently, five years ago I'd have to fuck around with JACK, install/compile a realtime kernel and wouldn't have as many DAWs & VSTs available.
Similarly, I have a friend in video production and VFX whose studio uses Linux everywhere. Blender, DaVinci Resolve, etc make that easy.
There is a lack of options when it comes to pro illustration and raster graphics. The Adobe suite reigns supreme there.
4 replies →
Is it winning if you are the only one playing the game?
Brag about this to an average Windows or Mac user and they will go "huh?" and "what is Linux?"
> Is it winning if you are the only one playing the game?
Depending on what you mean with "the game", I'd say even more so.
MS/Apple used to vilify or ridicule Linux; now they need to distribute it to make their own product whole, because it turns out having an open-source general-purpose OS is so convenient and useful that it's been utilized in lots of interesting ways (containers, for example) that the proprietary OSes simply had no equivalent for. I'd say it's a remarkable development.
By that logic, this feature and WSL shouldn't exist.
8 replies →
'Linux with macOS.'
[flagged]
I need to look into this a little more, but can anyone tell me if this could be used to bundle a Linux container into a MacOS app? I can think of a couple of places that might be useful, for example giving a GPT access to a Linux environment without it having access to run root CLI commands.
Yes, as long as you are okay with your app only working on macOS 26. Otherwise you can already achieve what you want using Virtualization.framework directly, though it'll be a little more work.
Yes, that's exactly what it's for.
Thinking more about this a bit, one immediate issue I see with adoption is that the idea of launching each container in its own VM to fully isolate it and give it its own IP, while neat, doesn't really translate to Linux or Windows. This means if you have a team of developers and a single one of them doesn't have a mac, your local dev model is already broken. So I can't see a way to easily replace Docker/Compose with this.
It translates exactly to Kubernetes, though, except without the concept of pods. I don't see anything in this that would stop them adding pods on top later, which would allow Kubernetes- or Compose-like setups (multiple containers in the same pod).
I wonder if this will dramatically improve gaming on a Mac? Valve has been making games more reliable due to Steam Deck, and gaming on Linux is getting better every year.
Could games be run inside a virtual Linux environment, rather than Apple’s Metal or similar tool?
This would also help game developers - now they only need to build for Windows, Linux, and consoles.
Apple's Virtualization Framework doesn't support 3D acceleration for non-macOS guests.
Isn't the Linux gaming stuff really an emulator for Windows games? So it'd be like, windows emulation inside Linux virtualization inside macos?
As far as I understand, it's a modified/extended version of Wine, which, as the name suggests, is not an emulator (but rather a userspace reimplementation of the Windows API, including a layer that translates DirectX to OpenGL/Vulkan).
The reverse, i.e. running Linux binaries on Windows or macOS, is not easily possible without virtualization, since Linux programs can make direct syscalls instead of always going through a dynamically linked system library that can take care of compatibility in the way that Wine does. (At the very least, it requires kernel support, like WSL1; Wine is all userspace.)
No, and with the sunset of Rosetta, they'll kill off many of the few games that run on macOS.
According to reporting Rosetta will still be supported for old games that rely on Intel code
> But after that, Rosetta will be pared back and will only be available to a limited subset of apps—specifically, older games that rely on Intel-specific libraries but are no longer being actively maintained by their developers. Devs who want their apps to continue running on macOS after that will need to transition to either Apple Silicon-native apps or universal apps that run on either architecture.
https://arstechnica.com/gadgets/2025/06/apple-details-the-en...
1 reply →
Windows games already run on macOS via WINE. Using a VM would just add overhead not reduce it.
I imagine running in a VM would hurt performance a lot.
Not necessarily. For example, the Xbox 360 runs every game in a hypervisor, so technically, everything is running in a VM.
It's all a question of using the right/performant hardware interfaces, e.g. IOMMU-based direct hardware access rather than going through software emulation for performance-critical devices.
Does anyone know whether they have optimized memory management, i.e. virt machine not consuming more RAM than required?
Not yet: https://github.com/apple/container/blob/main/docs/technical-...
From that document I read that it in fact does: it uses memory ballooning, so the VM only consumes as much RAM as the maximum amount requested by the app. It doesn't release memory if the app later starts consuming less, though.
In my opinion this is a step towards the Apple cloud hosting.
They have Xcode cloud.
The $4B contract with Amazon ends, and it’s highly profitable.
Build a container, deploy on Apple, perhaps with access to their CPUs.
It's quite a stretch to go from Apple launching container support for macOS to "they are going to compete with AWS". Especially considering Apple's own server workloads and data storage are mostly on GCP.
It's still virtualization, so it'll necessarily be (slightly) slower than just running Linux natively. I don't think Apple's hardware makes up for that, certainly not at the price point at which they sell it.
Compared to EC2? You've got to be kidding me.
Yeah that would be great. I dont understand why they dont explore this option
I wonder how this will affect apps like Orbstack
My guess is that Orbstack might switch to using this, and it'll just be a more competitive space with better open source options popping up.
People still want the nice UI/UX, and this is just a Swift package.
Orbstack also does kubernetes etc
Huh. I suppose it’s a good thing I never came around to migrating our team from docker desktop to Orbstack, even though it seems like they pioneered a lot of the Apple implementation perks…
I still haven't heard why anyone would prefer the new Apple-proprietary thing vs Orbstack. I would not hold my breath on it being better.
6 replies →
They could replace their underlying implementations with this, and for most users, they wouldn't notice the difference, other than any performance gains.
So the x64 containers will run fine on Apple Silicon?
On an ARM Linux target, they do support translation of Intel binaries under virtualization using Rosetta 2. I do not know if their containerization supports it.
https://developer.apple.com/documentation/virtualization/run...
Given that they announced a timeline for sunsetting Rosetta 2, it may be low priority.
x64 is not going away anytime soon, so that’s unfortunate
that's nice and all - but where are the native Darwin containers? Why is it ok for Apple to continue squeezing people with macOS CI jobs to keep buying stupid Mac Minis to put in racks only to avoid a mess? Just pull FreeBSD jails!
You can run macos VMs at the very least https://developer.apple.com/documentation/virtualization/run...
Sure, but they don't really scale due to Apple's refusal to provide a headless version with WindowServer off.
Also the EULA limits you to just two VMs per computer and only for very specific purposes. Clearly because they want you to buy their damn computers
This is my pain point.
I would really want to have a macOS (not just Darwin) container, but it seems that it is not possible with macOS. I don't remember the specifics, but there was a discussion here at HN a couple of month ago and someone with intrinsic Darwin knowledge explained why.
> it's not possible
Heck even Microsoft managed to run Windows containers on Windows, even with the technical debt and bloat they had. Apple could, they just don't want to because it goes straight against their financial interests
Not sure what exactly is happening, but feels very slow. Builds are taking way longer. Tried to run builder with -c and -m to add more CPU and memory.
What setup are you comparing this to? In the past silicon Macs plus, say, Rancher Desktop have been happy to pretend to build an x86 image for me, but those images have generally not actually worked for me on actual x86 hardware.
Comparing to Docker for Mac. Running on MBA M2. Building a 5GB image (packaging enterprise software).
Docker for Mac builds it in 4 minutes.
container tool... 17 minutes. Maybe even more. And I did set the CPU and memory for the builder to a higher number than the defaults (similar to what Docker for Mac is set to). And in reality it is not the build stage, but "=> exporting to oci image format" that takes forever.
Running containers - have not seen any issues yet.
Forget Linux containers on Mac, as far as I’m concerned that’s already a solved problem. What about Mac containers? We still don’t have a way to run a macOS app with its own process namespace/filesystem in 2025. And with all this AI stuff, there’s a need to minimise blast radius of a rogue app more than ever.
Is there any demand for mac binaries in production? I can't think of a single major cloud provider that offers Mac hardware based k8s nor why you'd want to pay the premium over commodity hardware. Linux seems to be the lingua franca of containerized software distribution. Even windows support for containers is sketchy at best
> I can't think of a single major cloud provider that offers Mac hardware based k8s nor why you'd want to pay the premium over commodity hardware
If you're a dev team that creates Mac/iOS/iPad/etc apps, you might want Mac hardware in your CI/CD stack. Cloud providers do offer virtual Macs for this purpose.
If you're a really big company (eg. a top-10 app, eg. Google) you might have many teams that push lots of apps or app updates. You might have a CI/CD workflow that needs to scale to a cluster of Macs.
Also, I'm pretty sure apple at least partially uses Apple hardware in the serving flow (eg. "Private Cloud Compute") and would have an interest in making this work.
Oh, and it'd be nice to be able to better sand-box untrusted software running on my personal dev machine.
2 replies →
I don't think the parent was asking for server side macOS containerization, but desktop. It'd be nice to put something like Cursor in a sandbox where it really couldn't rm -rf your home directory. I'd love to do the same thing with every app that comes with an installer.
3 replies →
I think at one point (many years ago) I read that imgix.com was using macs for their image processing CDN nodes.
In my experience, the only use case for cloud macs is CI/CD (and boy does it suck to use macOS in the cloud).
Mm... AppStore and Gatekeeper?
This does not support memory ballooning yet. But they have documented custom kernel support, so, goodbye OrbStack.
Orbstack is docker. People might still prefer docker.
Looks cool! In the short demo [0] they mention "within a few hundred milliseconds" as VM boot time (I assume?). I wonder how much tweaking they had to do, because this is using the Virtualization.framework, which has been around a while and is used by Docker Desktop / Colima / UTM (as an option).
I wonder what the memory overhead is, especially if running multiple containers - as that would spin up multiple VM's.
[0]: https://developer.apple.com/videos/play/wwdc2025/346 10:10 and forwards
They include the kernel config here[0]
> Containers achieve sub-second start times using an optimized Linux kernel configuration[0] and a minimal root filesystem with a lightweight init system.
[0]: https://github.com/apple/containerization/blob/main/kernel/c...
Related ongoing threads:
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44226978 - June 2025 (345 comments)
(Normally we'd merge them but it seems there are significant if subtle differences)
I hope it will support nested virtualization.
This is really bad news for Linux on Desktop.
Many developers I know don't use MacOS mainly because they depend on containers and virtualisation is slow, but if Apple can pull off efficient virtualisation and good system integration (port mapping, volumes), then it will eat away at a large share of linux systems.
Apple please expose GPU cores to the VMs.
I've used pytorch successfully in a MacOS VM on MacOS using https://tart.run/ so I'd expect it to work here too.
update: torch for Linux on ARM isn't built with Apple's MPS support so it didn't work with the pip install version. Perhaps it's possible to compile from scratch to have it.
You can use libkrun to pretty much do the same thing.
This is great. Also about time, etc.
But is it also finally time to fix dtrace on MacOS[0]?
[0]: https://developer.apple.com/forums/thread/735939?answerId=76...
Spoiler alert: it’s not containers.
It’s some nice tooling wrapped around lightweight VMs, so basically WSL2
Are the lightweight VMs running containers?
WSL1, rather.
I’m already running Docker on macOS; what’s the difference?
Will this likely have any implications for tools like ‘act’ for running local GitHub actions? I’ve had some trouble running act on apple silicon in the past.
In theory could make it more seamless, so installation instructions didn't include 'you must have a functioning docker engine' etc. - but in practice I assume it's a platform-agnostic non-Swift tool that isn't interested in a macOS-specific framework to make it smoother on just one platform.
Them synthesizing an EXT4 file system from tarball layers instead of something like EROFS is so extremely odd. Really really strange design.
Surprising to me that this uses swift CLI tools (free software) and make, not Xcode.
Containers are mainly for CI+testing and for Linux workflows, so xcodebuild is not really an option.
Xcode also has command line tools that can do the same.
Obtaining and using Xcode requires submitting to an additional license contract from Apple. Swift and Make do not.
5 replies →
And when will a native OCI macOS container engine arrive?!
Is this basically the same thing as Orbstack?
Terrible name. Look like a neat product though!
Tailored Swift would be better
TAYNE (short for conTAYNEr): https://www.youtube.com/watch?v=a8K6QUPmv8Q
Prefer the Nix approach unless a container approach is absolutely necessary.
This is just wsl2 from Microsoft, albeit with an Apple spin
disappointing there's still no namespacing in Darwin for macOS containers. Would be great to run xcodebuild in a container
[dead]
[flagged]
> Let's run linux inside a container inside docker inside macos inside an ec2 macos instance inside a aws internal linux host inside a windows pc inside the dreaming mind of a child.
Not even the first non-hyperbolic part of what you wrote is correct. "Container" most often refers to OS-level virtualization on Linux hosts using a combination of namespaces, cgroups, SDN, and some mount magic (among other things). macOS is BSD-based and therefore doesn't support the first two things in that list. Apple can either write a compatibility shim that emulates this functionality or virtualize the Linux kernel to support it. They chose the latter. There is no Docker involved.
This is a completely sane and smart thing for them to do. Given the choice I'd still much rather run Linux but this brings macOS a step closer to parity with such.
To be honest, I don't know what Docker or any of these things are. I just wanted to sound smart so I could fit in and people would like me.
Getting worried about WSL I see!
Whenever I have to develop on Windows, I clone my repos and run neovim / docker inside of WSL, for the improved performance (versus copying / mounting files from windows host) and linux. The dev experience is actually pretty good once you get there.
I'm not sure this is the same, though. This feels more like docker for desktop running on a lightweight vm like Colima. Am I wrong?
This is my same workflow even for C#
I'm excited to run Systemd on mac!
:-)
It isn't systemd:
> Containers achieve sub-second start times using an optimized Linux kernel config, minroot filesystem, and a lightweight init system, vminitd
https://github.com/apple/containerization/blob/main/vminitd
Wouldn't be surprised if this goes through the same process Windows users did with WSL. Starting out with no systemd, to community-developed systemd-in-a-bottle setups, to proper systemd integration
> I'm excited to run Systemd on mac!
OCI containers are supposed to be "one container, one main process": at the very least, the container's entrypoint runs as PID 1 (other processes may be spawned, but typically the container's main process is PID 1).
Containerization is practically the antithesis of systemd.
So I don't understand your comment.
If they're going this way, why not just replace the macOS kernel (XNU) with Linux? They'll get so much more.
Because the rest of the system uses a bunch of things that have no drop-in Linux equivalent - SIP, Mach ports, firmlinks, etc.
Those can be emulated with the likes of SELinux, sockets, and bind mounts. It will take a lot of effort and some adaptation, but it could be done.
I'm glad this will kill the Docker Desktop clone business on Mac. A friend's company got hit using one of the free ones and got rug-pulled by them.
I think this is purely a checkbox feature to compare against WSL. Otherwise Apple just wouldn't get involved (not the engineers, who would do lots of good things, but the management that let this out).
Cool, but until someone (Apple or otherwise) implements Docker Compose on top of this, it's unlikely to see much use.
You only need to expose a docker daemon, which docker compose will use. The daemon is just a unix socket to a process that manages the containers, which is very likely a trivial change on top of the existing container codebase.
For instance, Orbstack implements the docker daemon socket protocol, so despite not being docker, it still allows using docker compose where containers are created inside of Orbstack.
Requires an Apple Silicon Mac to run.
> You need an Apple silicon Mac to build and run Containerization.
> To build the Containerization package, your system needs either:
> macOS 15 or newer and Xcode 26 Beta
> macOS 26 Beta 1 or newer
For those on Intel Macs: this is your last chance to switch to Apple Silicon (Sequoia was the second-to-last)[0], as macOS Tahoe is the last version to support Intel Macs.
[0] https://news.ycombinator.com/item?id=41560423
Also, there are some really amazing deals on used/refurb M2 Macs out there. ~$700 for a MacBook Air is a pretty great value, if you can live with 16GB of RAM and an okay but not amazing screen.
$450 for a M4 Mac mini (at Microcenter, but Best Buy will price match) is possibly the best computer hardware deal out there. It is an incredible machine.
12 replies →
Indeed. I just grabbed a mint M3 MBA on ebay for about $950 with a 1TB ssd (which tbh was my main need to upgrade this family member in the first place, as they weren't CPU-bound on the old M1). Wild deals to be had!
a 30% discount for a 3 yr old machine is good? A new one is $999.
1 reply →
Even better deals on M1s which aren't much slower than M2s
Any Linux or BSD that has good hardware support for Intel Macs?
For the older ones with Broadcom WiFi I was able to get stock Ubuntu working great by following this:
https://askubuntu.com/questions/55868/installing-broadcom-wi...
Not sure about the newer ones.
Gathering this information and putting together a distro to rescue old Macbooks from the e-waste bin would be a worthwhile project. As far as I can tell they're great hardware.
I imagine things get harder once you get into the USB-C era.
1 reply →
That was officially communicated at the state of the union session.