Was at Oculus post acquisition and can say that the whole XROS was an annoyance and distraction the core technology teams didn’t need. There were so many issues with multiple tech stacks that needed fixing first.
Mind you, this XROS idea came after Oculus reorged into FB proper. It felt to me like there were FB teams (or individuals) that wanted to get on the AR/VR train. Carmack was absolutely right, and after the reorg his influence slowly waned, for the worse.
Just a small bunch of XROS people came from FB proper (mostly managers), because the average FB SWE doesn't have the required skills. Most folks were hired from the industry at E5/E6, and I think we only ever took one or two bootcampers, who ultimately were not successful and quickly moved elsewhere in FB.
It looks to me like Meta was a victim of ivory-tower researchers who just wanted to pursue impractical theoretical research at the company's expense.
There is some value in a huge company funding this research, as long as it doesn't affect the practical, real for-profit projects.
John describes exactly what I'd like someone to build:
"To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers."
As a thought experiment:
* Pick a place where cost-of-living is $200/month
* Set up a village which is very livable. Fresh air. Healthy food. Good schools. More-or-less for the cost that someone rich can sponsor without too much sweat.
* Drop a load of computers with little to no software, and little to no internet
* Try reinventing the computing universe from scratch.
Love this idea, and wondering where that low-cost-of-living place would be. But genuinely asking:
What problem are we trying to solve that is not possible right now? Do we start from hardware, at the CPU?
I remember an ex-Intel engineer once said: you could learn about all the decisions behind modern ISA and CPU uArch design, along with GPUs and how it all works together, but by the time you have done all that and could implement a truly better version from a clean sheet, you'd already be close to retiring.
And that is assuming you have the professional opportunity to learn about all of this, implement, fail, make mistakes, relearn, etc.
> Love this idea and wondering where that low cost of living place would be
Parts of Africa and India are very much like that. I would guess other places too. I'd pick a hill station in India, or maybe some place higher up in sub-Saharan Africa (above the insects)
> What problem are we trying to solve that is not possible right now?
The point is more about identifying the problem, actually. An independent tech tree will have vastly different capabilities and limitations than the existing one.
Continuing the thought experiment -- to be much more abstract now -- if we placed an independent colony of humans on Venus 150 years ago, it's likely computing would be very different. If the transistor weren't invented, we might have optical, mechanical, or fluidic computation, or perhaps some extended version of vacuum tubes. Everything would be different.
Sharing technology back-and-forth a century later would be amazing.
Even when universities were more isolated, something like 1995-era MIT computing infrastructure was largely homebrew, with fascinating social dynamics around things like Zephyr, interesting distributed file systems (AFS), etc. The X Window System came out of it too, more-or-less, which in turn allowed for various types of work with remote access unlike those we have with the cloud.
And there were tech trees built around Lisp-based computers / operating systems, Smalltalk, and systems where literally everything was modifiable.
More conservatively, even the interacting Chinese and non-Chinese tech trees are somewhat different (WeChat, Alipay, etc. versus WhatsApp, Venmo, etc.)
You can't predict the future, and having two independent futures seems like a great way to have progress.
Plus, it prevents a monoculture. Perhaps that's the problem I'm trying to solve.
> Do we start from hardware at the CPU ?
For the actual thought experiment, too expensive. I'd probably offer monitors, keyboards, mice, and some kind of relatively simple, documented microcontroller to drive those. As well as things like ADCs, DACs, and similar.
Continuing the thought experiment: There's an interesting sort-of contradiction in this desire: I, being dissatisfied with some aspect of the existing software solutions on the market, want to create an isolated monastic order of software engineers to ignore all existing solutions and build something that solves my problems; presumably, without any contact from me.
It's a contradiction very much at the core of the idea: should I expect the operating system my monastic order produces to be able to play Overwatch, or to open .docx files? I suspect not; but why? Because they didn't collaborate with stakeholders. So, they might need to collaborate with stakeholders; yet that was the very thing we were trying to avoid by making this an isolated monastic order.
Sometimes you gotta take the good with the bad. Or, uh, maybe Microsoft should just stop using React for the Start menu, that might be a good start.
> maybe Microsoft should just stop using React for the Start menu, that might be a good start.
Agree, but again worth pointing out the obvious. I don't think anyone is actually against React per se, as long as M$ could ensure React renders all their screens at 120fps with no jank, 1-2% CPU usage, minimal GPU usage, and little memory usage. All that at least 99.99% of the time. Right now it isn't obvious to me this is possible without significant investment.
Not saying these are perfect, but consider reviewing the work of groups like the Internet Society or even IEEE sections. Boots on the ground to some extent, such as providing gear and training. Other efforts like One Laptop Per Child also leaned into this kind of thinking.
What could it mean for a "tech" town to be born, especially with the techniques and tools we have today? While the dream has not really been borne out yet (especially at a village level), I would argue we could do even better in middle America with this thinking: small college towns. While it's a bit of an existing gravity well, you could make a focused effort to get a flywheel going (redoing mini Bell Labs around the USA to solve regional problems could be a start).
Yes, it takes decades. My only thought on that is, many (dare I say most) people don't even have short-term plans, much less long-term plans. It takes visionaries with nerves and will of steel to stay on paths that make things happen.
Pick a university, and give them $1B to never use Windows, macOS, Android, Linux, or anything other than homebrew?
To kick-start, give them machines with Plan 9, ITS, or an OS based on Lisp / Smalltalk / similar? Or just microcontrollers? Or replicate 1970s-era university computing infrastructure (where everything was homebrew)?
Build out coursework to bootstrap from there? Perhaps scholarships for kids from the developing world?
They will just face the same problems we solved decades ago and reinvent mostly the same solutions we had decades ago.
In a few decades they will reach our current level, but the rest of the world won't have been idling for those decades, and nobody will need the old problems solved again.
I've written a lot of low level software, BSPs, and most of an OS, and the main reason to not write your own OS these days is silicon vendors. Back in the day, they would provide you a spec detailed enough that you could feasibly write your own drivers.
These days, you get a medium-level description and a Linux driver of questionable quality. Part of this is just laziness, but mostly this is a function of complexity. Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
Intel still does it. As far as I can see, they're the only player in town that provides open, detailed documentation for their high-speed NICs [0]. You can actually write a driver for their 100Gb cards from scratch using their datasheet. Most other vendors will either (1) ignore you, (2) make you sign an NDA, or (3) refer you to their poorly documented Linux/BSD driver.
Not sure what the situation is for other hardware like NVMe SSDs.
Wow... that PDF is 2,750 pages! There must be an army of technical writers behind it. That is an incredible technical achievement.
Real question: Why do you think Intel does this? Does it guarantee a very strong foothold into data center NICs? I am sure competitors would argue two different angles: (1) this PDF shares too much info; some should be hidden behind an NDA, (2) it's too hard to write (and maintain) this PDF.
The NVMe spec is freely downloadable and sufficient to write a driver with, if your OS already has PCIe support (which doesn't have open specifications). You don't need any vendor-specific features for ordinary everyday use, so it's a bit of a different situation from NICs. (Also, NVMe was in very large part an Intel creation, though it's maintained by an industry consortium.)
That's interesting that it's that short. A long while ago I had aspirations of implementing a custom board for Prestonia-/Gallatin-era Xeons, and the datasheets and specs for those were around 3,000 pages, IIRC. The supporting infra docs were about that long as well. So I'm surprised to see a modern Ethernet controller fit into the same space. I appreciated all of the docs; because they were so open, I felt like I could actually achieve that project, but other things took priority.
Yeah this. I tried to modify a hobby OS recently so it would process the "soft reboot" button (to speed up being rebooted in GCP) and it was so unbelievably hard to figure out how to support it. I tried following the instructions on the OS Dev Wiki and straight up reading what both Linux and FreeBSD do and still couldn't make progress. Yes. The thing that happens when you tell Windows or Linux to "restart". Gave up on this after spending days on it.
The people who develop OSes are cut from a different cloth and are not under the usual economic pressures.
I also think that they have access to more helpful resources than people outside the field do, e.g. being able to contact people working on the lower layers to get the missing info. These channels exist in the professional world, but they are hard to access.
> Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
That's what's claimed. That's what people say, yet it's just an excuse. I've heard the same sort of excuse from people who, after writing a massive codebase, say "Oops, sorry, didn't get around to documenting it".
And no, hardware is not more difficult than software to document.
If the system is complex, there's more need to document, just as with a huge codebase. On their end, they have new employees to train up, and they have to manage testing. So any excuse that silicon vendors have to deal with such immense complexity? My violin plays for them.
> "Oops, sorry, didn't get around to documenting it".
That's obviously the wrong message. They should say "Go ask the engineering VP to get us off any other projects for another cycle while we're writing 'satisfying' documentation".
Extensive documentation comes at a price few companies are willing to pay (and that's not just a matter of resources; look at Apple's documentation).
> If the system is complex, there's more need to document
It’s not first party documentation that’s the problem. The problem is that they don’t share that documentation, so in order to get documentation for an “unsupported” OS a 3rd party needs to reverse engineer it.
I find myself largely unable to document code as I write it. It all seems obvious at the time. It's when I go back to it later, and I re-figure it out, that the documentation then can be written.
My hunch is that for nearly anyone who is serious about it these days, the way forward is either to have unusually tight control over the underlying platform, or to include a servant Linux installation with your OS. If Windows is a buggy set of device drivers, then Linux is a free set of buggy device drivers. If you're happy with your OS running as a client of a Linux hypervisor indefinitely then you could go for that; otherwise you'd have to try to gradually move bits of the hardware support into your OS over time—ideally faster than new Linux dependencies arise...
At least for certain types of OSes, it should be relatively easy to get most of Linux's hardware support by porting LKL (https://github.com/lkl/linux) and adding appropriate hooks to access hardware.
Of course, your custom kernel will still have to have some of its own code to support core platform/chipset devices, but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
Also, it probably wouldn't work so well for typical monolithic kernels, but it should work decently on something that has user-mode driver support.
> but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
thus calling into question why you ever bothered writing a new kernel in the first place if you were just going to piggyback Linux's device drivers onto some userspace wrapper thingy.
I'm not necessarily indoctrinated to the point where I can't conceive of Linux being suboptimal in a way so fundamental that it requires no less than a completely new OS from scratch. But you're never going to get there by recycling Linux's device drivers, because that forces you to design your new OS as a Linux clone, in which case you definitely did not need to write an entire new kernel from scratch.
Writing drivers is easy; getting vendors to write *correct* drivers is difficult. At work right now we are working with a Chinese OEM on a custom Wi-Fi board whose chipset, firmware, and drivers are all supplied by the vendor. It's not even a new Wi-Fi chipset; they've used it in other products for years without issues. In conditions that are difficult to reproduce, the chipset sometimes gets "stuck" and basically stops responding or doing any Wi-Fi things. This appears to be a firmware problem, because unloading and reloading the kernel module doesn't fix the issue. We've supplied loads of pcap dumps to the vendor, but they're kind of useless because (a) pcap can only capture what the kernel sees, not what the Wi-Fi chipset sees, (b) it's infeasible for the chipset to log all its internal state, and (c) even if this were all possible, trying to debug the driver from gigabytes of low-level protocol dumps would be impossible.
Realistically for the OEM to debug the issue they're going to need a way to reliably repro which we don't have for them, so we're kind of stuck.
This type of problem generalizes to the development of drivers and firmware for many complex pieces of modern hardware.
XROS had a completely new and rapidly evolving system call surface. No vendor would've been able to even start working on a driver for their device, let alone hand off a stable, complete result. It wasn't a case of "just rename a few symbols in a FreeBSD implementation and run a bunch of tests".
> Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
You know, one would think that complex hardware should make writing a driver easier, because the hardware is able to take care of itself just fine and provide a reasonable interface, as opposed to devices of yore, which you had to babysit, wasting your main CPU's time and doing silly stuff like sending them two identical initialization commands with a 30 to 50 microsecond delay between them, or whatever.
No, the complexity usually isn't hidden. It's the driver's job to do that.
I guess one exception maybe is Nvidia who have sort of hidden the complexity by moving most driver functionality onto software on the card. At least that's how I understood it. Don't quote me on that.
heh, in mid-2000s all I had were a batch of misbehaving SATA controllers under freebsd, and an (actually quite well-written core of a) linux driver was all I had to work with.
Without that, we would have probably just switched hw, because the quite obscure bug was in the ASIC, and debugging that on 2005-6-ish hw is just infeasible.
The problem that is kind of glossed over here is that Meta hired a bunch of folks from Microsoft who were primarily interested in writing operating systems, and set them to work on XR. Obviously they wanted to write a custom operating system.
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
I've only seen John Carmack's public interactions, but they've all been professional and kind.
It's depressing to imagine HR getting involved because someone's feelings had been hurt by an objective discussion from a person like John Carmack.
I'm having flashbacks to the times in my career when coworkers tried to weaponize HR to push their agenda. Every effort was eventually dismissed by HR, but there is a chilling effect on everyone when you realize that someone at the company is trying to put your job at stake because they didn't like something you said. The next time around, the people targeted are much more hesitant to speak up.
I followed his posts internally before he left. He was strict about resource waste. Hand tracking would break constantly and he brought metrics to his posts. His whole point was that Apple has hardware nailed down and it’ll be efficient software that will be the differentiator. The bloat at Meta was the result of empire building.
I remember watching Carmack at a convention 15 years ago. He took a short sabbatical and came back with ID Tech 3 on an iPhone, and it still looks amazing well over a decade later.
This is a guy who figures that what he wants to do most with his 3 free weekends is to port his latest, greatest engine to a Cortex-A8. Leading corporate strategy? Maybe not. But Carmack on efficiency? Just do it.
I saw a few of those. He really leaned in on just how much waste was in the UI rendering, with some nasty looking call times to critical components. I think it was close to when he left.
Dude just seemed frustrated with the lack of attention to things that mattered.
But...that honestly tracks with Meta's past and present.
Carmack is a legend and I admire his work, but he seems to believe his own legend these days (like a few other big-ego gamedevs), and that can lead to arbitrary preferences being sold as gospel.
This is what got Lucovsky pushed out. He wanted to build an OS from scratch and couldn't see past the technical argument to acknowledge the product team's urgency to actually land something in the hands of customers. Meanwhile, he left a trail of toxicity that he doesn't even realize was there[0].
Interestingly, he was pulling the same BS at Google until reason prevailed and he got pushed out (but was allowed to save face and claim he resigned willingly[1]).
I saw the same thing at Google. A distinguished engineer tried, gently at first, to get a junior engineer to stop trying to do something that was a bad idea. They persisted, so he told them very bluntly to stop. HR got involved.
I even found myself letting really bad things go by, because it was just going to take way too much of my time to spoon-feed people and get them to stop.
I have mixed feelings about this. On one hand, JC is someone I look up to, at least from the perspective of engineering. On the other hand, putting myself in the shoes of someone who got the once-in-a-lifetime chance to build a new OS with corp support for a shiny new device… I sure as hell would want to do this.
Look at the outcome of Meta's performance in AR/VR over the past few years: a fortune has been spent; relatively little has been achieved; the whole thing is likely about to be slashed back; VR, something Carmack believes in, remains a bit commercially marginal and easily dismissed; and Carmack's own reputation has taken a hit from association with it all. You can understand perfectly well why he doesn't feel that it would have been harmless to just let other people have whatever fun they wanted with the AR/VR Zuckbucks.
(Mind you, Carmack himself was responsible for Oculus' Scheme-based VRScript exploratory-programming environment, another Meta-funded passion project that didn't end up going far. It surely didn't cost remotely as much as XROS though.)
> If the platform really needs to watch every cycle that tightly, you aren't going to be a general purpose platform, and you might as well just make a monolithic C++ embedded application, rather than a whole new platform that is very likely to have a low shelf life as the hardware platform evolves.
Which I think is agreeable, up to a certain point, because I think it's potentially naive. That monolithic C++ embedded application is going to be fundamentally built out of a scheduler, IO and driver interfaces, and a shell. That's the only sane way to do something like this. And that's an operating system.
Exactly! It seems very narc-y. Just let me build my cool waste of company resources, it's not like Zucky is going to notice, he's too busy building his 11 homes.
Imagine being able to build an operating system, basically the end-game of being a programmer, and get PAID for it. Then some nerd tells on you.
I got the chance to do this at Microsoft, it is indeed awesome! Thankfully the (multiple!) legendary programmers on the team were all behind the effort.
Anyway, if anyone reading this gets a chance to build a custom OS for bespoke HW, and get paid FAANG salary to do so, go for it! :-D
meta was a weird place for a while. because of psc (the performance rating stuff) being so important… a public post could totally demoralize a team because if a legend like carmack thinks that your project is a waste of resources, how is that going to look on your performance review?
"impact" is facebook-speak for "how useful is this to the company", and it's an explicit axis of judgement.
How large is their headcount these days? And how many actually useful products have they launched in the last decade? You could probably go full Twitter and fire 90% of the people, and it would make no difference from a user perspective.
But... That's not an HR violation. If something a team is working on is a waste of resources, it's a waste. You can either realize that and pivot to something more useful (like an effort to take the improvements of the current OS project and apply them to existing OSes), or stubbornly insist on your value.
Why is complaining to HR even an option on the table?
Facebook has done very little in terms of new breakthrough products for a decade at least, and ByteDance has apparently just beaten them on revenue.
Yeah, people getting really angry if you say anything bad about a product (!) is a depressing commonality in certain places these days.
I got angry emails from people because I wrote "replacing a primary page of UI with this feature I never use doesn't give me a lot of value", because statements like that make "the team feel bad". It was an internal beta test whose whole purpose was finding issues before they went public.
Not surprisingly, once this culture takes root, the products start going down the drain too.
But who cares about good products in this great age of AI, right?
When I compare workplace dynamics in the American company I work for with local company a friend of mine works for, I feel like I sold my soul to the devil.
Masters of Doom does seem to want, however accurately or not, to set Carmack up as the antagonist of its story against Romero as the hero sometimes. I think readers largely didn't notice that, since Carmack's heroic image was already so firmly established. In fact, some of the early-id stuff really does raise some questions. (Was Tim Willits mostly Carmack's protégé, for instance?)
I’ve been on both the same side and the opposing side of debates with him, both in person and over internal discussion threads. His public persona and private behavior match. I viewed it positively, though per the topic of the thread, not everyone did.
If you're in high leadership, even just being pessimistic can be a massive morale killer. It doesn't mean that going to HR is the right call but I could see how someone would vent that way.
If you are in senior leadership and you find that your org has some people doing useless side projects for fun (and tons of money) that deliver no value, your job is to solve that problem by reassigning or firing them.
Facebook VR never needed a new OS in the first place. It needed actual VR.
Hehehe. I have talked to John Carmack a few times. He's super harsh and has zero filter or social niceties (Asperger's level; not that he is, just sayin'). If you are not used to it or don't understand where it's coming from, it can be quite a shock. Or at least he was, many years ago. Maybe he's changed.
I can see that. Sadly, there are a lot of people in the world who simply don't know how to deal with people who can be direct, if not somewhat abrasive, in their communication style. Their intent can be noble, well-intentioned, and not meant to offend. They simply don't beat around the bush or worry about whether your fragile ego will be bruised when they make an observation.
I've had to coach people and help them understand the entitlement involved in demanding that everyone adjust and adhere to their personal preferences and communication style. In my experience, it's about seeking to understand the person and adapt accordingly. Not everyone is willing to do that.
Sorry but if you know his story, seen candid videos of him, or talked to the people around him, he's a Linus-level "I'll say what I want" type.
These weird hagiographies need to go. Carmack is absolutely not known to be kind. I have no idea what happened here, but the idea that he's this kindly old grandpa who could never, ever be rude or unprofessional is really out there.
And stupid. Like it or hate it, a no-nonsense, direct-speaking, but fair and objective boss is the one you want. No one is served by failure; not the people at the top, nor the people at the bottom.
There is a difference between “this project is not going to work” vs “these people are incompetent and the project should be cancelled as a result”. The former needs to be said, the latter is a HR violation.
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
This is one of the reasons I’m sick of working pretty much anywhere anymore: I can’t be myself.
Appreciating people for their differences when they are humble and gifted is easy. I side with liberals, but I have a mix of liberal, moderate, and conservative friends.
But there are only so many years of pretending to appreciate all of the self-focused people that could be so much better at contributing to the world if they could quietly and selflessly work hard and respect people with different beliefs and backgrounds.
I’m happy for the opportunity I have to work, and I understand how millennials think and work. But working with boomers and/or gen X-ers would be so much less stressful. I could actually have real conversations with people.
I don’t think the problem is really with HR. I think the problem is a generation that was overly pandered to just doesn’t mix with the other generations, and maybe they shouldn’t.
I think the issue is, Carmack didn't talk like a "normal" facebook engineer.
Supposedly you were meant to have your disagreements in private, and then come to support whatever was decided. "Hold your opinions lightly." The latest version of it was something like "disagree and commit".
This meant that you got a shit tonne of group think.
This pissed off Carmack no end, because it meant shitty decisions were let out the door. He kept banging on about "time to fun": any feature that got in the way of starting a game up as fast as possible would get a public rebuke. (Rightly so.)
People would reply with "but the metric we are trying to move is x, y & z", which invariably would be some sub-team PSC (read: promotion/bonus/not-getting-fired system) optimisation. Carmack would basically say that the update was bad and they should feel bad. This didn't go down well, because up until 2024 one did not speak negatively about anything on Workplace. (Once, Carmack reported a bug to do with head tracking [from what I recall]; there was lots of back and forth, with the conclusion "won't fix, don't have enough resources". Carmack replied with a diff he'd made fixing the issue.)
Basically Carmack was all about the experience, and Facebook was all about shipping features. This meant that areas of "priority" would scale up staffing. Leaders distrusted games engineers("oh they don't pass our technical interviews"), so pulled in generalists with little to no experience of 3D.
This translated into small teams that had produced passable features growing 10x in 6 months and then producing shit. And because they'd grown so much, they constantly re-orged, pushed out the only 3D experts they had, and could then never deliver. But as it was a priority, they couldn't back down.
This happened to:
Horizons (the original roblox clone)
video conferencing in oculus
Horizons (the shared experience thing, as in all watching a live broadcast together)
Both those Horizons (I can't remember what the original names were) were merged into Horizon Worlds, along with the video conferencing for Workplace.
Originally each team was like 10; by the time I left, it was something like a thousand or more, with the original engineers either having left or moved on to something more productive.
tl;dr: Facebook didn't take to central direction-setting, i.e. "before we release product X, all its features must work, be integrated with each other, and have an obvious flow/narrative that links them together." Carmack wanted a good product; Facebook just wanted to iterate shit out the door to see what stuck.
Mechanisms for getting the Linux kernel out of the way are pretty decent these days, and CPUs with a lot of cores are common. That means you can isolate a bunch of cores, pin threads the way you want, and then use some kernel bypass to access hardware directly. Communicate between cores using ring buffers.
This gives you the best of both worlds: a carefully designed system for the hardware with near-optimal performance, and still the ability to take advantage of the full Linux kernel for management, monitoring, debugging, etc.
I was at Google when the Flutter team started building Fuchsia.
They had amazing talent. Seriously, some of the most brilliant engineers I've worked with.
They had a huge team. Hundreds of people.
It was so ambitious.
But it seemed like such a terrible idea from the start. Nobody was ever able to articulate who would ever use it.
Technically, it was brilliant. But there was no business plan.
If they wanted to build a new kernel that could replace Linux on Android and/or Chrome OS, that would have been worth exploring - it would have had at least a chance at success.
But no, they wanted to build a new OS from scratch, including not just the kernel but the UI libraries and window manager too, all from scratch.
That's why the only platform they were able to target was Google's Home Hub - one of the few Google products that had a UI but wasn't a complete platform (no third-party apps, for example). And even there, I don't think they had a compelling story for why their OS was worth the added complexity.
It boggles my mind that Fuchsia is still going on. They should have killed it years ago. It's so depressing that they did across-the-board layoffs, including taking away resources from critically underfunded teams, while leaving projects like Fuchsia around wasting time and effort on a worthless endeavor. Instead they just kept reducing Fuchsia while still keeping it going. For what?
Not only did they target Home Hub, they basically forced a rewrite on it (us, I worked on the team). After we already launched. And made our existing workable software stack into legacy. And then they were late. Then late again. And late again. With no consequences.
100% agree with your points. To me watching I was like -- yeah, hell, yeah, working on an OS from scratch sounds awesome, those guys have an awesome job. Too bad they're making everyone else's job suck.
By "forced" I guess you're referring to the room full of leads who all said yes, but then reported otherwise back down to their ICs to avoid retribution. I caught early wind of this from folks being super rude in early on-the-ground discussions and tried to raise it with Linus. One of the directors got his knickers in a twist and accused me of making a mountain out of a molehill. I guess clearly not, as the sentiment and division still stands.
It's a lot of work and hard to justify if you're looking for short term improvements. But if you're really committed to long term improvements, it absolutely makes sense. Google is actually willing to make long term investments. Publicly justifying the investment has never been a goal of the project which is why most folks probably don't understand it. Honestly I'm not sure why folks care enough to even do commentary on it. If you find it useful, you can participate, if not just ignore it.
Fwiw inventing a new application ecosystem has never been a goal and is therefore not a limitation for its viability. The hard part is just catching up to all the various technologies everyone takes for granted on typical systems. But it's not insurmountable.
I'm also not sold on the idea that having more options is ever a bad thing. People always talk about web browser monoculture and cheer on new entrants, yet no one seems to mind the os monoculture. We will all come out ahead if there are more viable OS out there to use.
> People always talk about web browser monoculture and cheer on new entrants, yet no one seems to mind the os monoculture. We will all come out ahead if there are more viable OS out there to use.
3 main OSes vs. 2 main browser engines for consumers to choose from?
Anyway, the main issue with browser engine consolidation is that whoever owns the engine can make or break what goes in there. Just think about VS Code's current status, with all the AI companies wanting to use it and make it their own product while MSFT attempts to curtail it. At some point either MSFT decides to commit to FOSS on this one, or the multiple forks will have to reimplement some functionality.
I think the hope is that you just start there. They might have migrated the meeting room devices. Why would you set out to replace *everything* at once? Do something, get some revenue/experience, then try to fan out.
Wasn't Fuchsia supposed to be a platform where different OSes could run in a virtual environment and software packages would be complete containers? Wasn't this a new way of tackling the ancient OS problem?
These were just my imaginings, though. I thought maybe an OS that could run on the web. Or an OS that could be virtualized to run across several machines. Or one that could run alongside several other instances on the same machine, each catering to a different user.
That doesn't sound anything like what fuchsia is or ever was. Fuchsia takes a different set of tradeoffs with respect to baseline primitives and built a new stack of low level user space on top of those new primitives. This gives the software fundamentally different properties which might be better or worse for your use case. For consumer hardware products I think it comes out ahead, but only time will tell.
And the crazy thing is there is arguably a lot more of a reason for Meta / Oculus to have had its own operating system, because it is meant for a specific configuration of hardware and to utilize those hardware resources toward quite a different goal than most other OSes out there. Even in that environment it was still a waste.
I guess it's just a political shit show at this point. Ideas go hard if the people behind them aren't playing the game well enough, no matter their value.
My understanding is that people are working on Fuchsia in name only at this point. Of course some people are passionate enough to try and keep it alive, but it's only useful to the degree that it can help the Android team move faster.
Back in, mmm, 2002 or 2003 or 2004, while at Microsoft, I read an internal paper from a few OS guys who hackathoned something for Bill Gates's Think Week (which is when he used to go to some island in the San Juans or somewhere similar and just read curated papers and think; it was huge prestige to get such a paper to him). That something was an OS written from scratch, with GC and memory management, on top of something very .NET-framework'y (which had been released a couple of years earlier). They had it booting on all kinds of hardware and doing various neato things. One of the explicitly called-out design principles was zero compatibility with anything Windows that came before. Which is why it didn't go anywhere, I assume. I remember it was just a handful of engineers (presumably OS folks) hacking for like a month. It was awesome to read about.
Singularity was cool. I'm sad that it was abandoned. The concept of using software isolation instead of hardware memory protection was really interesting.
My clear recollection is that this one started much earlier, as hackathon skunkworks, before something like it happened at MSR. It didn't do anything beyond the kernel and a command line; there was no browser. I don't know if the two shared roots either. Anyhow, yeah, both were still intellectual feats!
It matters who you communicate concerns to. Something as fundamental as "I think that your team shouldn't even exist" should go to the team leads and their managers exclusively at first. Writing that to the entire affected team is counterproductive in any organization because it unnecessarily raises anxiety and reduces team productivity and focus. Comments like this from influential people can have big mental and physical health impacts on people.
This entire situation looks very suspicious. Was Carmack even responsible for triaging research projects and allocating resources for them? If yes, then he should have fought that battle earlier. If no, then the best he could do is to refuse to use that OS in projects he controls.
Maybe on a mediocre team. But that was the parent comment's point.
On well-functioning teams, product feedback shouldn't have to be filtered through layers of management. In fact, it would be dishonest to discuss something like this with managers while hiding it from the rest of the team.
> Comments like this from influential people can have big mental and physical health impacts on people.
So what are we supposed to do? Just let waste continue? The entire point of engineering is to understand the tradeoffs of each decision and to be able to communicate them to others...
I'm sure that kind of crap helped nudge JC out of there. He mentions (accurate and relevant) reasons why something is probably a bad idea, and the person in charge of doing it complains that JC brought up the critiques, rather than addressing the critiques themselves. What a pathetic, whiny thing to do.
You've got to remember that context is critical with stuff like this.
There's nothing wrong with well-founded and thoughtful criticism. On the other hand, it is very easy for this to turn into personal attacks or bullying - even if it wasn't intended to be.
If you're not careful you'll end up with juniors copying the style and phrasing of less-carefully-worded messages of their tech demigod, and you end up with a huge hostile workplace behaviour cesspit.
It's the same reason why Linus Torvalds took a break to reflect on his communication style: no matter how strongly you feel about a topic, you can't let your emotions end up harming the community.
So yes, I can totally see poorly-worded critiques leading to HR complaints. Having to think twice about the impact of the things you write is an essential part of being at a high level in a company, you simply can't afford to be careless anymore.
It's of course impossible to conclude that this is what happened in this specific case without further details, but it definitely wouldn't be the first time something like this happened with a tech legend.
What would be the real advantage of a custom OS over a Linux distribution?
The OS does process scheduling, program management, etc. Ok, you don’t want a VR headset to run certain things slowly or crash. But some Linux distributions are battle-tested and stable, and fast, so can’t you write ordinary programs that are fast and reliable (e.g. the camera movement and passthrough use RTLinux and have a failsafe that has been formally verified or extensively tested) and that’s enough?
I think the proper comparison point here is probably what game consoles have done since the Xbox 360, which is basically run a hypervisor on the metal with the app/game and management planes in separate VMs. That gives the game a bare metal-ish experience and doesn't throw away resources on true multitasking where it isn't really needed. At the same time it still lets the console run a dashboard plus background tasks like downloading and so on.
For this use case a major one would be better models for carved up shared memory with safe/secure mappings in and out of specialized hardware like the gpu. Android uses binder for this and there are a good number of practical pains with it being shoved into that shape. Some other teams at Google doing similar stuff at least briefly had a path with another kernel module to expose a lot more and it apparently enabled them to fix a lot of problems with contention and so on. So it’s possible to solve this kind of stuff, just painful to be missing the primitives.
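A toy illustration of the difference, using Python's `multiprocessing.shared_memory` as a stand-in for the dmabuf-style carved-up regions described above (the real Android/binder machinery is far more involved):

```python
from multiprocessing import shared_memory

# "Producer" side: create a shared buffer, standing in for a carved-up
# shared-memory region that a GPU driver might export.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:4] = b"\x01\x02\x03\x04"  # e.g. a rendered tile

# "Consumer" side: attach to the same region by name. No copy and no
# serialize/deserialize round-trip, unlike marshaling data through an
# IPC call.
view = shared_memory.SharedMemory(name=shm.name)
assert bytes(view.buf[:4]) == b"\x01\x02\x03\x04"

view.close()
shm.close()
shm.unlink()
```

The pain described in the comment is about getting safe/secure versions of exactly this kind of mapping in and out of specialized hardware, which the stock primitives don't give you.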
Based on the latter tweet in the chain, I'm wondering if Carmack is hinting that Foveated Rendering (more processing power is diverted towards the specific part of the screen you're looking at) was one advantage envisioned for it. But perhaps he's saying that he's not so sure if the performance gains from it actually justify building a custom OS instead of just overclocking the GPU along with an existing OS?
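The scheduling side of foveation is easy to sketch; the hard part is the GPU and display plumbing. A toy Python version, with made-up thresholds (the radii and rates here are purely illustrative, not from any real headset):

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, fovea_radius=200.0):
    """Toy foveation schedule: full resolution at the gaze point,
    coarser shading blocks as eccentricity grows. The radii and
    rates are invented for illustration."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d < fovea_radius:
        return 1  # shade every pixel
    if d < 2 * fovea_radius:
        return 2  # shade in 2x2 blocks
    return 4      # shade in 4x4 blocks

print(shading_rate(960, 540, 960, 540))  # → 1 (at the gaze point)
print(shading_rate(0, 0, 960, 540))      # → 4 (far periphery)
```

Whether squeezing this kind of logic deep into the OS beats doing it in the compositor or the GPU driver is exactly the tradeoff Carmack seems to be questioning.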
Wouldn't that be an application (or at most system library) concern though? The OS is just there to sling pixels, it wouldn't have any idea whether those pixels are blurry… well for VR it would all be OpenGL or equivalent so the OS just did hardware access permissions.
Maybe not applicable for the XR platform here, but you could add introspection capabilities not present in Linux, a la Genera letting the developer hotpatch driver-level code, or get all processes running on a shared address space which lets processes pass pointers around instead of the Unix model of serializing/deserializing data for communication (http://metamodular.com/Common-Lisp/lispos.html)
I stated this elsewhere, but at least six years ago a major justification was a better security model. At least that’s what Michael Abrash told me when I asked.
Everyone wants to make an OS because that's super cool and technical and hard. I mean, that's just resume gold.
Using Linux is boring and easy. Yawwwwn. But nobody makes an OS from scratch, only crazy greybeard developers do that!
The problem is, you're not crazy greybeard developers working out of your basement for the advancement of humanity. No. You're paid employees of a mega corporation. You have no principles, no vision. You're not Linus Torvalds.
My understanding is that this is a key tenet of visionOS's design, where apps don't get access to gaze data (I think unless they're taking over the full screen?)
The only reason Chinese companies can even get away with these big projects is because of state backing and state objectives. By itself, the market doesn't support a new general-purpose OS at this point.
Besides, the statement's completely nonsensical - there were multiple OSes developed by for-profit corporations in the West (Microsoft, Apple, Nintendo, QNX, Be, etc.).
It's kind of an extraordinary statement that an OS couldn't be developed by a for-profit organization, especially if the hardware's somewhat fixed and you don't need to support every piece of equipment under the sun.
Actually the “market” won’t prioritize anything that won’t give returns as soon as possible (except for the weird situation of VC money being poured in).
Geopolitical reasons for making your own OS are actually reasonable and understandable. Not saying they are good, because I would much prefer a planet where we collaborate on these things… but they’re not dumb. They make sense in a similar way the space race made sense.
> I wish I could drop (so many of) my old internal posts publicly, since I don’t really have the incentive to relitigate the arguments today – they were carefully considered and prescient. They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad, but I expect many of them would acknowledge in hindsight that the Meta products would not be in a better place today if the new OS effort had been rammed into them.
So someone at Meta was so sensitive that being told their behemoth of a project was ill advised ended up getting reported to HR?
I was there when they wanted to do the custom XROS. I remember asking them in a Q&A session exactly why they would build this and I recall the reasoning behind it totally fell flat. Fundamentally it became clear these guys just wanted to write a new OS because they thought it would be cool or fun.
Much of the scenarios they tried to address could have been done with Mach or some realtime kernel or with fuchsia. I recall later on they did consider using fuchsia as the user space for the os for some time.
On another note, there was a similarly "just for fun" language effort in the org as well ("FNL"). That one was conceived by a bunch of randos, not compiler people, who had no product vision and just did it for fun.
Well when the era of efficiency arrived all of this stuff ended.
Late 2019 I had a short conversation with Abrash about a new OS for the next set of glasses and my immediate reaction was “why?” He was adamant that there was a security need which Linux could not fill (his big concern was too much surface area for exploits in the context of untrusted 3rd party code). I remember thinking that this would be a surprise to cloud engineers at the big hosters, but chose not to continue the argument. He didn’t get where he is by being dramatically wrong very often, after all, but it still struck me as a waste. Note I did not work at Meta so he may have had stronger justifications he chose not to expose.
I worked on a completely different hardware project within Meta, and while they didn't want a custom OS, they used an off-the-shelf RTOS with the intention of modifying it, and it was a shit show. They had a million justifications for why they needed it, but they had no performance tests or metrics to actually justify it. They incurred a huge development overhead for no verifiable performance improvements.
All of the code they wrote could just as well have been a Linux kernel module. It would've also been so much easier, given all the documentation and general knowledge people have about Linux both within the company and outside it.
You could write a book on why it's practically impossible to create a new OS these days. Love Carmack for stating it so clearly. I also love that he called out TempleOS; I have a weird respect for it too. Plan 9 is probably the best example of a totally new OS, and I hope someday it becomes viable because it's really a joy to use.
But ultimately it just makes sense to take an existing kernel/OS (say, Arch) and adapt it to your needs. It can be hair-pullingly frustrating, and it requires the company to be willing to upstream changes, and it still takes years, but the alternative is decades, because what sounds good and well designed on paper just melts when it hits the real world, and Linux has already gone through those decades of pain.
Android built a new, giant moat for Linux (or "Linux" depending on your opinions about Android) in the embedded application processor space - now the "standard" board support package target for new embedded AP hardware is almost always some random point-in-time snapshot of Android. Running "mainline" Linux is hard (because the GPU and media peripheral drivers are usually half-userspace Android flavored stuff and rely on ION and other Androidisms) and bare-metal is even worse (where previously, you'd get register-level documentation, now, you get some Android libXYZ.so library).
Writing TempleOS software taught me lower-level programming! The OS is weird and idiosyncratic, but much more polished and logical than you'd expect from seeing videos of its author.
I think people have forgotten about Google Fuchsia which I guess is a good sign for a new OS. They’ve done quite well in deploying it seamlessly to their consumer devices.
"Quite well" by what metric? It shipped on one device. That's pretty much the lowest bar you can imagine! Did it provide any tangible benefit to anyone? Let alone a benefit commensurate with the enormous cost of developing it and continuing to maintain it?
I think it was insane to start a new OS effort written in C/C++. We have plenty of OSes written in C/C++! We know how that story ends. If you're going to spend the effort, at least try a new language that could enable a better security model.
While I agree with the sentiment given my bias towards safe systems languages, Genode OS, which is a relatively recent research OS, is pretty much mostly C++, although they added some Ada/SPARK as well.
Fun rumor: Google shut down the AR effort and transferred the team to project Fuchsia as a way to retain highly skilled employees. So essentially they didn’t have any real technical needs for a new OS.
Isn't that somewhat debatable? Originally they were aiming at much more (a Chromebook OS, for example), but it seems like they settled for Google Home only as their scope.
Still a very interesting project, but it feels like a similar story: for limited use cases (a smart thermostat/speaker with specific hardware) it works, but for wider use cases with heterogeneous hardware and complex interfaces (an actual screen, peripherals) it didn't.
In that case, why wouldn't they "just" fork Linux? Or 10-years-ago-Linux?
The technical justification for Meta writing their own OS is that they'd get to make design decisions that suited them at a very deep level, not that they could do the work equivalent of contributing a few drivers to an existing choice.
> I wish I could drop (so many of) my old internal posts publicly, since I don’t really have the incentive to relitigate the arguments today – they were carefully considered and prescient. They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
Carmack being Carmack, I'm sure the HR report came to nothing but it's just another reminder of the annoyances I don't miss about working at a BigCo. In the end, it doesn't matter that it went nowhere, that he was right or that it was an over-reaction and likely a defensive move in inter-group politics Carmack wasn't even playing - it just slowly saps your emotional energy to care about doing the right things in the right ways.
My first month at Amazon someone reported me for laughing at them…I didn’t even know they existed, on the other end of the open floor. I was laughing at something completely unrelated.
That made me really think about how fragile and toxic people can be.
Another Amazonian almost got fired for reacting with a monkey covering eyes emoji to a post shared by a black person (no malintent, of course, just an innocent “mistake” most normal people wouldn’t even think twice about).
Jonathan Blow is the world’s most successful hobbyist programmer. His whole thing is doing projects from scratch. Every game he made could be done in Unity with far less effort.
Most opinions of this man exist in a vacuum, isolated from the real-world software industry. Building an OS from scratch is one of those examples.
It never seems like there's a significant reason behind them other than………"I made dat :P"
As an outsider...his games just look and feel different. They feel like bones-deep art, in a way that even the best of the best games (say, Hades) don't. Since Blow's games are puzzle games they're not even my favorite games! But the effort spent on making them exactly the way he wants them pays off.
It is genuinely ridiculous to say that The Witness could "have been made in Unity with far less effort". It's easy to forget that people on this and every forum love to just say stuff for the sake of having said something, until you encounter a topic with which you are extremely familiar.
I don't think Unity was polished enough when Braid came out in ~2008 to pull off easily rewinding time on low-end Xbox hardware. The Witness could maybe have been done in Unreal? But there are some wild things The Witness does that I've never seen an Unreal game do.
He earned the acknowledgment of his peers at GDC for the work he has made, and anyone can make games with Unity, just like anyone can write a novel with Word. Making one without pre-made tooling, now that is a skill in itself.
Why is there such a meme among gamers about Unity- and Unreal-based games?
Exactly because so many make so little effort that it is clear where the game is coming from.
Is the difficulty in theoretical complexity of operating systems, or in project scoping/scope creep?
It's probably not that hard to write bare-metal code for a modern CPU that runs and crashes. It's obviously insurmountably hard to compete with Android on features with scratch-built bare-metal code. An "OS" can be anything between the two. But it's very easy to imagine an "XR OS" project snowballing quickly into the latter, and Carmack's concerns would be spot on (as always, and as proven). Is it then an inherent difficulty in "designing a new operating system", or is it technically something else?
People are nailing it here -- it's not the "OS" per se (heck, look at CP/M or original unix and this gives you a floor) -- it's the drivers and the required Standard Pieces: DMA, memory protection, TCP/IP, BLE, WiFi, Ethernet, GPU, USB (+ vast USB drivers), etc. Lotta 'standard' software must work on any new OS + drivers for all that hardware that is always changing. Great 'fun' .... Now, I do think Minix had a chance for a very short window, but lack of resources made sure it's now only an OS history footnote. All because a microkernel sucks 3% performance away from monolithic kernels. We are confused as an Industry... Performance uber alles -- that has just GOT to get fixed. Security; maintainability; simplicity > performance; or rather, those are worth some performance degradation.
> To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers.
I mean, I'd give a fair shake to an OS from the SQLite team [1].
I'd love a truly new OS, but I just don’t know what it would look like at this point? "New OS" ideas tend to converge on the same trunk.
Building a hobby OS taught me how little is just "software". The CPU sets the rules. Page tables exist because the MMU says so. Syscalls are privilege flips. Task switches are register loads and TLB churn. Drivers are interrupt choreography. The OS to me is just policy wrapped around fixed machinery.
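That "policy wrapped around fixed machinery" framing can be illustrated with a toy round-robin scheduler, using Python generator frames as a stand-in for the saved register file (purely illustrative; a real context switch saves actual registers and swaps page tables):

```python
# Toy illustration of "a task switch is just swapping saved state":
# each generator's suspended frame plays the role of the saved
# register file, and the run queue is the scheduling policy.
def task(name, n):
    for i in range(n):
        yield f"{name}:{i}"  # one "time slice" of work

ready = [task("A", 2), task("B", 2)]  # run queue
trace = []
while ready:
    t = ready.pop(0)               # policy: round robin
    try:
        trace.append(next(t))      # "restore" state, run one slice
        ready.append(t)            # preempt: state saved, requeue
    except StopIteration:
        pass                       # task exited
print(trace)  # → ['A:0', 'B:0', 'A:1', 'B:1']
```

Everything the hardware dictates (what a frame is, how it resumes) is fixed; the only thing the "OS" decides is which frame runs next.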
I think any OS can be divided into a "backend" that deals with the hardware and a "frontend" user-level applications with a UI. The backend is mostly similar everywhere, while the frontend is what the general public typically perceives as the "OS".
It's hard to see anything truly new in the "invisible" backend, but the frontend changes with every update (Windows, Mac, Linux etc).
ACPU OS is a good example of this, where the backend can be a different OS, an emulator or actual hardware, while the frontend remains the same across all execution environments.
https://www.acpul.org/blog/so-fast
The XROS thing sounds sort of like PenPoint OS, which was used on the EO 440 and EO 880 tablet-plus-cellphone computers that came out around the same time as the Newton (early 90s), but with larger screens and optional cellular voice/data/fax connectivity. Their tagline was "The Pen is the Point". Besides having a Wacom tablet as the pen-input device (requiring a driver), it baked in the notion (true at the time) that connectivity was sporadic, and therefore you had to be opportunistic when you got a reliable cell signal (or were plugged into a phone jack). Those two ideas sure as heck did not require a whole new OS to support, but a whole company was built to market said OS. https://en.wikipedia.org/wiki/PenPoint_OS?useskin=vector Interestingly, this company ended up being folded into EO itself, as there seemed to be no market for a pen-based OS.
There is another doomed project that XROS reminds me of: Apple's "Pink" OS. Brief history: https://lowendmac.com/2014/pink-apples-first-stab-at-a-moder... "Pink was spun out as Taligent. The kernel was jettisoned. Taligent would run on top of an operating system and act as an object-oriented system (like OpenStep). It was released in 1995, but it sold poorly. It was canceled altogether in 1998." https://en.wikipedia.org/wiki/Taligent?useskin=vector More history: http://www.roughlydrafted.com/RD/Q4.06/36A61A87-064B-470D-88... After Apple tried Pink, Taligent, and Copland, they ended up using Mach / FreeBSD and some pieces from other BSDs (as I understand it). Today, we have Windows and Unix of some flavor in the main. I think Geordi La Forge was using one of these OSes on his warp-drive computers...
Meta has the talent and the balance sheet to pull this off. Worst case scenario we end up with one more open sourced operating system. Who knows what happens 20 years down the line.
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
Sigh... Usual company politics.
No matter how much money you pour in, with top talent, code quality, documentation, etc., developing a custom OS doesn't make sense.
Been there, seen that. I faced a similar situation at one company. They failed on a custom, Not-Invented-Here-derived implementation. My technically correct skepticism was criticized for decreasing the morale of the team working on it.
I love this part: "To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers"
I've been developing a solo ACPU OS for many years now, one that's fast and simple enough to be better than any known OS. That's why I believe all OS development problems come from overengineering and overcapitalization. https://www.acpul.org/blog/so-fast
This is completely right from a product point of view, which is Carmack's argument.
But I have wondered why one of these companies with billions of dollars to burn hasn't tried to create something new as a strategic initiative. Yes, there wouldn't be any ROI for years, and yes, the first several products on the platform would probably be better off on something more traditional.
But the long term value could potentially be astronomical.
Just another case of quarterly-report-driven decision making, I suppose. Sigh.
Historically? The internet, the concept of a graphical user interface, the mouse, the smartphone, the LCD display, the laser printer...
It's about clever people trying weird stuff, and occasionally ending up with a world-changing idea. Asking for examples of to-be-discovered innovations is, by definition, an impossibility.
If you're competing against nothing, then I see it: it opens up a wide variety of product possibilities. But linux exists. Why not spend 1/1000th the time to adapt linux?
That's not even counting the rather substantial risk that your new OS will never approach the capabilities of linux, and may very well never become generally usable at all.
Option A: spend years and millions on a project that may never be as good as existing solutions, diverting attention and resources from actual products, or...
Option B: work on products now, using an existing, high-quality, extensible, adaptable OS, with very friendly licensing terms, for which numerous experts exist, with a proven track record of maintenance, a working plan for sustainability, a large & healthy developer community exists, etc.
It's hard to imagine how it wouldn't be a complete waste of time.
Apple bought one of those in the 90s, and they are still reaping the benefits of that strategic initiative. But the thing is, NeXT allowed Apple to think up new, differentiated products. If you come at the problem of the OS from a purely technical perspective, you'll waste time for no gain.
This is what Google has been trying to do with Fuchsia and the fact is that you can't escape the product point of view because the products exist, already have an OS stack, and get pretty defensive when another team tells them they're going to replace their OS, or their core if the product team is Android or Chrome OS.
But I have wondered why one of these companies with billions of dollars to burn hasn't tried to create something new as a strategic initiative.
They have; Taligent comes to mind. You may not have heard of that -- or more likely, you have but just forgot about it -- but it's a good object lesson (no pun intended) in why a successful new OS is hard to just conjure into existence. There has to be a crying, desperate need for it, not just a vague sense that This Time We'll Get It Right.
You could probably cite OS/2 Warp as a less-obscure example of the same phenomenon.
While I appreciate Carmack and all, I'd love to hear from someone like Dave Cutler who's been over that bridge at least a couple of times successfully about if and what he'd do if he had resources to create whatever the hell he wants.
Another example of a new OS developed by a vendor is DryOS by Canon [0], a replacement for Wind River's VxWorks [1]. It has been extensively explored by the CHDK community of custom software extensions for Canon cameras. It appears to have some compatibility with Linux in some form.
In my non-expert mind, an OS for "foveated rendering" would be similar to what many cameras prioritize, and more likely resemble a "realtime OS" of some sort. OTOH, Apple's goggles use the XNU kernel, so maybe a microkernel would be sufficiently realtime, similar to QNX, often used for automotive applications [4].
But what should you be running on an XR headset? The OS has to be real time.
Linux can sort of do that. Probably a stripped down Linux. About 90% of Linux is irrelevant or undesirable in this application.
Unless you’re designing the silicon yourself, stripping user space from Linux is several orders of magnitude easier than writing new device drivers for your brand new OS.
> To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers. Which was sort of Plan 9…
And yet, Sony did it, Nintendo did it, and both have been pretty successful.
We also need to be clear what an OS is. Is it "darwin" or "macOS" - they have different scopes.
Things I'd want from an OS for an XR device.
1. Fast boot. I don't want to have to wait 2-3-4-5 minutes to reboot for those times I need to reboot.
I feel like Nintendo figured this out? It updates the OS in the background somehow and reboot is nearly instant.
2. Zero jank. I'm on XR; if the OS janks in any way, people will get sick AND perceive the product as sucking. At least I do. iOS is smooth, Android is jank AF.
Do any of the existing OSes provide this? Sure, maybe take an existing OS and modify it, assuming you can.
Nintendo is an interesting example though. According to Wikipedia they actually use a proprietary microkernel, which, if I'm reading this right, I think they developed themselves. Looks like the only open source components they have are some networking code, which is published under the BSD license.
Sony and Nintendo both forked off of either NetBSD or FreeBSD. Sony's cameras at least up until the A7S2 run Linux (there's jailbreaks for these), although I never found any kernel / bootloader dump like it would be required.
Android suffers from being Java at the core, with all the baggage that brings with it.
Sony cameras all run Linux. Models that support the "PlayMemories Camera Apps" feature, like the A7M2, run an Android userland on top. It's probably easier to count the cameras that don't (like old Olympus).
I like that the top reply to Carmack's wall of text is a screenshot of TempleOS with a doodle of an elephant lmao. And ironically, that meme reply is on topic and it says a thousand words with just one photo.
Another point I would add in support of that meme comment is Google's recent rug-pull on Android: no longer allowing the sideloading of apps from unsigned developers starting this autumn, after over a decade of conquering the market with their "go with us, we're the open alternative to iOS" marketing.
The conclusion is to just never EVER trust big-tech/VC/PE companies, even when they do nice things, since they're 100% just playing the long game, getting buddy-buddy with you waiting till they smothered the competition with their warchest, and then the inevitable rug-pull comes once you're tied to their ecosystem and you have nowhere else to go.
Avoid these scumbags, go FOSS from the start, go TempleOS. /s but not really
I'm not sure Carmack's point disagrees with you. Meta is still big tech, and if your goal is to monetize at scale, rolling out your own isn't the most efficient way to do it. I don't think he'd discourage you rolling out your own OS if it's your hobby FOSS project.
In other words, unless God has specifically called upon you to build an OS, and maybe provided divine inspiration and assistance, you should avoid doing that. Seems to support Carmack's point!
Unless it's for love or devotion, there's no compelling reason to create a new OS in 2025. Certainly that could change in the future, but I think his observation (if I understand him correctly) is correct.
Just google/youtube the history of TempleOS and its creator. It's fun, sad and tragic at the same time.
Spoiler alert: a single person coded it in his own programming language, but the person suffered from severe mental illnesses and ended up taking their own life.
The problem with this guy is that it’s hard to criticize him, whether at work or in this forum. For example, I am going to be downvoted for mocking the fact that this guy thinks it’s some genius move to say “No” to making an operating system, whatever making an operating system means.
The fact that Facebook, a company far richer than Bell Labs ever was, has, like all of FAANG, a culture of expensive employees babysitting broken software products (rather than a lab-researcher vs. field-technician separation), and cannot be bothered to make the long-term investment in a new OS, is why I think the industry doesn't actually deserve the R&D tax breaks HN was bemoaning had gone away until this year.
The point of R&D is the time horizon is long, and the uncertainty is high. Making JS slop that then has to be constantly babysat is opex, not capex.
The problem when working for Meta is that if you do a good job, you've helped make the world worse... so the real heroes are the people wasting money and reducing efficiency
If you're at all competent, go work somewhere else
One of the better "service to humanity" opportunities for software engineers is to join a company like Meta or TikTok and perform awfully for as long as you can.
Yeah I'm making the world a better place by earning 500k a year doing a bad job to slow down this company. Look at how much good I am doing sorry I can't hear you over my paycheck clearing
The best way to serve humanity in your professional life is to serve humanity in your professional life.
In other words, be useful. You don't have to worry about "being good" or "doing good" though many do and it's quite admirable to do so. But that's not the bar you have to clear.
The bar you should try to clear is to be useful. If what you're doing all day is helping people have shelter, or raise families, or be more healthy, or have more knowledge, or even be entertained or amused, you're being useful to people.
If what you do all day ultimately serves to make people poorer, more divided, more addicted, and more unwell, then what you're doing is not useful, it's harmful.
If what you're doing all day primarily contributes, even indirectly, to making people's lives worse, then nothing you do after that will erase it. Arguments to the contrary are just rationalization.
I think a better service to humanity is to excel at your job even if you end up at a socially corrosive org like Meta or Tiktok but donate a decent chunk of your paycheck to effective altruist charities that save lives.
Whatever you think of Meta's core products, they pay a ton of people to work on various open source projects and do R&D on things only tangentially related to social media, like VR or data center tech.
There are worse ways to get a paycheck while doing what you are interested in.
zstd can't really be attributed to Facebook. Yann Collet started work on it before joining Facebook, so it was kind of imported.
I am sure it made developing and standardizing the algorithm easier, but what makes it such a good (performant) algorithm is the design of the original creator.
But isn't that true for every big corp, or even every public company? Even if founders may have had some other goals in addition to making money, as time passes profit becomes the only goal, and usually more profit is generated by doing bad and malicious things.
There are lots of profit motivated big companies that cause much less collateral damage. Facebook ranges from individualised harm like showing kids makeup ads when they delete a selfie, to macro scale harm like election interference
You could take a job designing landmines and you'd have a real hard time causing as much actual harm, as there just aren't enough wars going on to reach the same scale
Nokia (mostly networking-related things nowadays) touts itself - or at least used to; I haven't kept up to date - as one of the most ethical companies around.
> But isn't that true for every big corp, or even every public company?
So I suppose not really, no.
Additionally companies working on carbon-free energy might also serve as evidence. There are some big ones around.
I think I can say that this wasn't the case with Sun Microsystems. I never worked there, but everything I read on that company was positive. I hate the fact that Oracle (one of the worst) bought them.
Depending on the founder. With Apple it can be reasoned that it only went down after you know who passed away.
Yeah, it's not reliable to count on one charismatic leader to run the whole thing, but that is what the corporate model has been doing, and how we ended up here.
Facebook seems to have a strange relationship with most Americans, while the rest of the world is quite happy with it. Including both WhatsApp and Instagram.
I find this kind of comment revolting. If I owe something, I owe it to my family and my parents, so if Meta comes to make me an offer and I accept it, that's my business and no one else's. Strangers on the internet, instead of judging people based on the company they work for and dividing them into "good" and "bad", should get off their high horses and join these companies, if they are capable, and change them from the inside if they think they are doing bad things.
Didn't even realize this was a thread and not a single tweet until you posted this link. Guess that's the downsides of not having a Twitter acct anymore.
The Twitter algorithm is open source, unlike the algorithm for Facebook, Instagram, TikTok etc. I'm not aware of any evidence for bias in the algorithm.
I’ve seen this firsthand. These giant tech companies try to just jump into a massive new project thinking that because they have built such an impressive website and have so much experience at scale they should just be able to handle building a new OS.
In my case it wasn’t even a new OS it was just building around an existing platform and even that was massively problematic.
The companies that build them from scratch have had it as one of their core competencies pretty much from the start.
They have contributors to the linux kernel.
Pretty sure all the big tech companies have the right people to create a new OS that is better than Linux, the hard part is getting that new OS to be adopted.
He's acting like their VR UX is top notch when it's as bad as it gets. Just yesterday I dusted off my Meta Quest 2 to play a bit, and spent around an hour trying to pair my left controller to the headset after replacing the battery.
You can't do it without going through their fucking app, which asks for every permission under the sun, including GPS positioning for some reason. After finally getting the app working and pairing it with my headset, I realized the controller was just dead and there was nothing to be done.
You can pair the controllers in the settings; you don't need an app. Their VR UX does suck, that is true, and Horizon Worlds is such a colossal failure that I'm surprised they haven't cancelled it entirely yet. But Carmack also stated the technical issues numerous times.
> You can't do it without going through their fucking app, which asks for every permission under the sun, including GPS positioning for some reason.
If it uses Bluetooth (which it might, for the controller), the permission for Bluetooth on Android is fine location --- the same permission as for using GPS. That might be the same permission you need for Wi-Fi stuff too, because products and services exist to turn observed Bluetooth and Wi-Fi MAC addresses into a fine location.
But who knows what they do with the GPS signal after they ask for it?
No, it doesn't use Bluetooth. Or maybe it does under the hood but the permissions they ask for are GPS and "see nearby devices". You are able to pair your device with Bluetooth disabled in the phone's quick menu.
If a professional can't give critical feedback in a professional setting without being rude or belittling others, then they need to improve their communication skills.
This is not that though. This is just developers being unable to handle constructive criticism, and when they can't win the argument on merits, went for the HR option. It happens.
I've had it happen to me too, but my response was to resign on the spot (I was already not satisfied with the company).
The "toxic behaviour" I had done? I reverted a commit on the master branch that didn't compile, and sent a slack to the Dev who had committed it saying "hi! There appears to have been a mistake in your latest commit, could you please check it out and fix it? I've reverted it in the meantime since I need to deploy this other feature"
The dev responded by force pushing the code that did not compile to master and contacted HR.
I decided there was greener grass on other pastures. I was right.
Having worked in the valley, I've seen what critical feedback meant in many companies there, and it removes all usefulness of the info because there is a ceiling of what is socially acceptable to say; therefore, you can't know how bad or urgent things are.
Everything is ASAP. They are super excited about everything. And nothing you do is wrong, it just could be improved or they like it but don't love it.
You don't know if something is important, basically.
Just like Louis CK said, "if you used 'amazing' on chicken nuggets, what are you going to say when your first child is born?". But in reverse.
Personally, I'd rather work with someone who would tell me my work is terrible if it is.
In Germany, you can't even legally say somebody did a bad job at your company in a recommendation letter. Companies created a whole subtext to workaround that, it's crazy.
Some things are just bad. You should be able to say it is. Not by saying it could be better. Not by using euphemism. It's just something that needs to go to the trash.
In fact, I don't trust people who can't receive this information, even if not packaged with tact (which you should attempt to, but life happens). If you can't handle people not being perfectly polite every time, I can't help but feel I won't be able to count on you when things get hard.
Being "reported to HR" doesn't mean "almost got fired". It likely meant a meeting where someone explained "hey, the way you communicated that caused some upset, let's discuss better ways to handle that situation next time." Very often in larger companies, complaints about things like "this bigwig from this other group jumped all over us" are automatically sent through HR because HR has staff whose job just is resolving conflicts between people and keeping things peaceful.
From what you know of Carmack, does "can't give critical feedback in a professional setting without being rude or belittling others" sound like him to you? It does not to me, though granted maybe he's different in his non public persona than what you can see in presentations and talks.
You've concluded this from a single, brief, throwaway line? Any madness you perceive about this situation has been fabricated by you, based on the details we have.
People have been getting mad at being made to feel bad at work for much longer than "safe space culture" has existed. If someone or some team has more power than you at an organization, you will for sure get reprimanded for making them feel bad.
Reading between the lines, it sounds like he got reported for giving a lot of what might kindly be described as unsolicited advice. The guy left Meta ages ago, but he apparently still can't let this one go.
If you're in the middle of trying to write a new operating system, then it's probably not helpful to have John Carmack standing over you repeatedly telling you that you shouldn't be doing it. In this case Carmack gets the last laugh. Then again, it is easy to get the last laugh by predicting that a project will fail, given that most projects do.
When a veteran tells you something and is passionate about it, maybe it is worth listening, or at least dealing with it internally. In the end, he left anyway even though the project didn't fail, and Meta remains wealthy but largely mediocre in terms of the products it delivers, while relying heavily on startup acquisition and large spending. Pretty sure most people who work there only do so for premium rent-seeking.
None of it surprising if this is a signal of how they operate.
> If you're in the middle of trying to write a new operating system, then it's probably not helpful to have John Carmack standing over you repeatedly telling you that you shouldn't be doing it. In this case Carmack gets the last laugh. Then again, it is easy to get the last laugh by predicting that a project will fail, given that most projects do.
I mean, if you're working on a project that is likely to fail, wouldn't it be nice if someone gave you cover to stop working on it, and then you could figure out something else to do that might not fail? Can't get any impact if your OS will never ship.
Sometimes you have to let people fail, even though you can see it coming. It sounds like Carmack was sticking his nose in a project that wasn’t under his purview and he dug his heels in a bit too much when he should have just let it fail.
All the FAANG do dumb shit all the time and waste huge sums of money, if you work at a FAANG the best thing you can do is stay in your lane and don’t do dumb shit — eventually it will shake out.
I have been bullied around by L7s (as a L5) sticking their nose in things, and the best thing you can do is clearly articulate what you are doing and why, and that you understand their feedback. Turns out the L7 got canned — partially due to their bullying — and I got promoted for executing and being a supportive teammate, so things worked out in the end.
It got mentioned for a reason. And obviously escalating with HR is a big deal as it comes with career risks for the person you are reporting. Risking someone else's career should be a last resort but seems to be more commonly a knee-jerk reaction with HR becoming weaponised.
The drawback of this is you lose good talent and keep rent-seekers instead.
The only reason you want me to "cool off", is because you feel bad just interacting with somebody expressing a polite, strong opinion. Online. On the other side of the world. With text.
Something tells me that if we heard the other side of the story it might hit different. There's a lot of wiggle room in what "making his team members feel bad" could mean, and I would be surprised if constructively voiced criticism would have gotten someone written up.
With my experience of being written up for constructive criticism the reasoning was that I didn’t give constructive criticism to others and they felt singled out. I only give such criticism in private so of course they were not there to see the others. Apparently that wasn’t a sufficient explanation.
It is madness; you would be surprised how many people take things too seriously. Been there - had a talk with HR because I said the solution was mediocre and we had to do something better than that.
Does SteamOS count as something Carmack would discourage as well? Yes it's a Linux-based system and yes even based on an existing distro, but it is a purpose-specific OS and it seems like it's working well for Valve and people using it to play Windows games without Windows...
> I can only really see a new general purpose OS arriving due to essentially sacrificing a highly successful product’s optimality to the goal of birthing the new OS
tbh linux has quite a bit of cruft in it these days at the syscall and interface layer.
if youre apple, it does make sense to do stuff from scratch. i think in a way, software guys wind up building their own prisons. an api is created to solve problem X given world Y, but world Y+1 has a different set of problems - problems that may no longer be adequately addressed given the api invented for X.
people talk about "rewrite everything in rust" - I say, why stop there? lets go down to the metal. make every byte, every instruction, every syscall a commodity. imagine if we could go all the way back to bare metal programming, simply by virtue of the LLM auto-coding the bootloader, scheduler, process manager, all in-situ.
the software world is full of circularities like that. we went from Mainframe -> local -> mainframe, why not baremetal -> hosted -> baremetal?
Apple doesn't do a lot of baremetal development from scratch that I'm aware of. The Lightning to HDMI dongle bootstraps an XNU kernel with an AirPlay decoder into 256MB RAM, for instance.
You can still do “unsafe” stuff in rust, and people do. It’s perfectly possible to write safe C and C++ these days. And you don’t have to deal with a borrow checker, and a very small pool of developers available to hire.
oh, i didnt mean to invoke rust in any technical sense - i brought up rust to introduce an example of the attitude that rust people are known for, namely "why not rewrite everything?", which a lot of people have a kneejerk rejection of.
Was at Oculus post acquisition and can say that the whole XROS was an annoyance and distraction the core technology teams didn’t need. There were so many issues with multiple tech stacks that needed fixing first.
Mind you, this XROS idea came after Oculus reorged into FB proper. It felt to me like there were FB teams (or individuals) that wanted to get on the ARVR train. Carmack was absolutely right, and after the reorg his influence slowly waned, for the worse.
Only a small bunch of XROS people came from FB proper (mostly managers), because the average FB SWE lacks the required skills. Most folks were hired from the industry at E5/E6, and I think we only ever took one or two bootcampers, who ultimately were not successful and quickly moved elsewhere in FB.
What were the required skills that bootcampers lacked? Has anybody without a university degree succeeded there?
It looks to me like Meta was a victim of ivory tower researchers who just want to pursue their impractical theoretical research at the company's expense.
There is some value in a huge company funding this research, as long as it doesn't affect the practical, real, for-profit projects.
[flagged]
You can't attack other people like this on HN. Since you've been breaking the site guidelines in other threads as well, I've banned this account.
Please don't create accounts to break HN's rules with.
https://news.ycombinator.com/newsguidelines.html
John describes exactly what I'd like someone to build:
"To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers."
As a thought experiment:
* Pick a place where cost-of-living is $200/month
* Set up a village which is very livable. Fresh air. Healthy food. Good schools. More-or-less for the cost that someone rich can sponsor without too much sweat.
* Drop a load of computers with little to no software, and little to no internet
* Try reinventing the computing universe from scratch.
Patience is the key. It'd take decades.
Love this idea, and wondering where that low-cost-of-living place would be. But genuinely asking:
What problem are we trying to solve that is not possible right now? Do we start from hardware at the CPU?
I remember an ex-Intel engineer once said that you could learn about all the decisions behind modern ISA and CPU uArch design, along with GPUs and how it all works together - but by the time you have done all that and could implement a truly better version from a clean sheet, you are already close to retiring.
And that is assuming you have the professional opportunity to learn all of this: to implement, fail, make mistakes, relearn, etc.
> Love this idea and wondering where that low cost of living place would be
Parts of Africa and India are very much like that. I would guess other places too. I'd pick a hill station in India, or maybe some place higher up in sub-Saharan Africa (above the insects)
> What problem are we trying to solve that is not possible right now?
The point is more about identifying the problem, actually. An independent tech tree will have vastly different capabilities and limitations than the existing one.
Continuing the thought experiment -- to be much more abstract now -- if we placed an independent colony of humans on Venus 150 years ago, it's likely computing would be very different. If the transistor weren't invented, we might have optical, mechanical, or fluidic computation, or perhaps some extended version of vacuum tubes. Everything would be different.
Sharing technology back-and-forth a century later would be amazing.
Even when universities were more isolated, something like 1995-era MIT computing infrastructure was largely homebrew, with fascinating social dynamics around things like Zephyr, interesting distributed file systems (AFS), etc. The X Window System came out of it too, more-or-less, which in turn allowed for various types of work with remote access unlike those we have with the cloud.
And there were tech trees built around Lisp-based computers / operating systems, Smalltalk, and systems where literally everything was modifiable.
More conservatively, even the interacting Chinese and non-Chinese tech trees are somewhat different (WeChat, Alipay, etc. versus WhatsApp, Venmo, etc.)
You can't predict the future, and having two independent futures seems like a great way to have progress.
Plus, it prevents a monoculture. Perhaps that's the problem I'm trying to solve.
> Do we start from hardware at the CPU?
For the actual thought experiment, too expensive. I'd probably offer monitors, keyboards, mice, and some kind of relatively simple, documented microcontroller to drive those. As well as things like ADCs, DACs, and similar.
Zero software, except what's needed to bootstrap.
Software is bloated and unreliable. It's clearly a "local minimum".
been writing an OS for over 10 years to try.
its seriously not something you want to do if you want to get anywhere.
then again, its a lot of fun, maybe imagining where it could be some day if you had an army of slave programmers (because it still wont make money lol)
Continuing the thought experiment: There's an interesting sort-of contradiction in this desire: I, being dissatisfied with some aspect of the existing software solutions on the market, want to create an isolated monastic order of software engineers to ignore all existing solutions and build something that solves my problems; presumably, without any contact from me.
It's a contradiction very much at the core of the idea: should I expect that the operating system my monastic order produces will be able to play Overwatch or open .docx files? I suspect not; but why? Because they didn't collaborate with stakeholders. So they might need to collaborate with stakeholders; yet that was the very thing we were trying to avoid by making this an isolated monastic order.
Sometimes you gotta take the good with the bad. Or, uh, maybe Microsoft should just stop using React for the Start menu, that might be a good start.
>maybe Microsoft should just stop using React for the Start menu, that might be a good start.
Agree, but again worth pointing out the obvious: I don't think anyone is actually against React per se, as long as M$ could ensure React renders all their screens at 120fps with no jank, 1-2% CPU usage, minimal GPU resources, and little memory usage - all that at least 99.99% of the time. Right now it isn't obvious to me this is possible without significant investment.
An isolated monastic order in the hills around the Himalayas should ideally be completely isolated from Overwatch and .docx files.
Not saying these are perfect, but consider reviewing the work of groups like the Internet Society or even IEEE sectors. Boots on the ground to some extent such as providing gear and training. Other efforts like One Laptop Per Child also leaned into this kind of thinking.
What could it mean for a "tech" town to be born, especially with what we have today in terms of techniques and tools? While the dream has not really been borne out yet (especially at the village level), I would argue we could do even better in middle America with this thinking: small college towns. While it's a bit of an existing gravity well, you could make a focused effort to get a flywheel going (redoing mini Bell Labs around the USA, solving regional problems, could be a start).
Yes it takes decades. My only thought on that is, many (dare say most) people don't even have short term plans much less long term plans. It takes visionaries with nerves and will of steel to stay on paths to make things happen.
Love the experiment idea.
Pick a university, and give them $1B to never use Windows, MacOS, Android, Linux, or anything other than homebrew?
To kick-start, give them machines with Plan 9, ITS, or an OS based on Lisp / Smalltalk / similar? Or just microcontrollers? Or replicate 1970-era university computing infrastructure (where everything was homebrew)?
Build out coursework to bootstrap from there? Perhaps scholarships for kids from the developing world?
They will just face the same problems we solved decades ago and reinvent mostly similar solutions to the ones we also had decades ago.
In a few decades they will reach our current level, but the rest of the world won't have been idle for those decades, and we no longer need to solve the old problems.
Honestly sounds like a very cool Science fiction concept.
A bit like Anathem.
Not quite the same but check out A Canticle for Leibowitz
Who needs good schools? Make it "The Summer of code in Sardinia"
or "The Summer of code in Pyonyang"
I'd rather drop a load of musical instruments into said village but I guess I'm completely missing the point.
He might be describing Elbonia.
I want this job.
I've written a lot of low level software, BSPs, and most of an OS, and the main reason to not write your own OS these days is silicon vendors. Back in the day, they would provide you a spec detailed enough that you could feasibly write your own drivers.
These days, you get a medium-level description and a Linux driver of questionable quality. Part of this is just laziness, but mostly this is a function of complexity. Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
Intel still does it. As far as I can see, they're the only player in town that provides open, detailed documentation for their high-speed NICs [0]. You can actually write a driver for their 100Gb cards from scratch using their datasheet. Most other vendors would either (1) ignore you, (2) make you sign an NDA, or (3) refer you to their poorly documented Linux/BSD driver.
Not sure what the situation is for other hardware like NVMe SSDs.
[0] 2750 page datasheet for the e810 Ethernet controller https://www.intel.com/content/www/us/en/content-details/6138...
Wow... that PDF is 2,750 pages! There must be an army of technical writers behind it. That is an incredible technical achievement.
Real question: Why do you think Intel does this? Does it guarantee a very strong foothold into data center NICs? I am sure competitors would argue two different angles: (1) this PDF shares too much info; some should be hidden behind an NDA, (2) it's too hard to write (and maintain) this PDF.
The NVMe spec is freely downloadable and sufficient to write a driver with, if your OS already has PCIe support (which doesn't have open specifications). You don't need any vendor-specific features for ordinary everyday use, so it's a bit of a different situation from NICs. (Also, NVMe was in very large part an Intel creation, though it's maintained by an industry consortium.)
On the other hand, see the complete mess that are the IPU6/7 camera chipsets and their Linux support.
It's interesting that it's that short. I remember a long while ago I had aspirations of implementing a custom board for Prestonia-/Gallatin-era Xeons, and the datasheets and specs for those were around 3,000 pages, IIRC. The supporting infra docs were about that long as well. So I'm surprised to see a modern Ethernet controller fit into the same space. I appreciated all of the docs; because it was so open, I felt like I could actually achieve that project, but other things took priority.
Yeah, this. I tried to modify a hobby OS recently so it would process the "soft reboot" button (to speed up being rebooted in GCP), and it was so unbelievably hard to figure out how to support it. I tried following the instructions on the OSDev Wiki and straight-up reading what both Linux and FreeBSD do, and still couldn't make progress. Yes, the thing that happens when you tell Windows or Linux to "restart". Gave up on this after spending days on it.
The people who develop OSes are cut from a different cloth and are not under the usual economic pressures.
I also think that they have access to more helpful resources than people outside the field do, e.g. being able to contact people working on the lower layers to get the missing info. These channels exist in the professional world, but they are hard to access.
To clarify, are you having trouble getting the signal to reboot from the gcp console into your OS? Or are you having trouble rebooting on gcp?
1 reply →
The VMM on GCP has only really been tested with Linux. You are kinda wasting your time, the only way to make it work is to make the hobby OS Linux.
11 replies →
Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
That's the claim, and that's what people say, but it's just an excuse. It's the same sort of excuse people reach for after they write a massive codebase and then say "Oops, sorry, didn't get around to documenting it".
And no, hardware is not more difficult than software to document.
If the system is complex, there's more need to document it, just as with a huge codebase. On their end, they have new employees to train up and testing to manage. So any excuse silicon vendors make about dealing with such immense complexity? My violin plays for them.
> "Oops, sorry, didn't get around to documenting it".
That's obviously the wrong message. They should say "Go ask the engineering VP to get us off any other projects for another cycle while we're writing 'satisfying' documentation".
Extensive documentation comes at a price few companies are willing to pay (and that's not just a matter of resources; look at Apple's documentation).
10 replies →
> If the system is complex, there's more need to document
It’s not first party documentation that’s the problem. The problem is that they don’t share that documentation, so in order to get documentation for an “unsupported” OS a 3rd party needs to reverse engineer it.
I find myself largely unable to document code as I write it. It all seems obvious at the time. It's when I go back to it later, and I re-figure it out, that the documentation then can be written.
My hunch is that for nearly anyone who is serious about it these days, the way forward is either to have unusually tight control over the underlying platform, or to include a servant Linux installation with your OS. If Windows is a buggy set of device drivers, then Linux is a free set of buggy device drivers. If you're happy with your OS running as a client of a Linux hypervisor indefinitely then you could go for that; otherwise you'd have to try to gradually move bits of the hardware support into your OS over time—ideally faster than new Linux dependencies arise...
At least for certain types of OSes, it should be relatively easy to get most of Linux's hardware support by porting LKL (https://github.com/lkl/linux) and adding appropriate hooks to access hardware.
Of course, your custom kernel will still have to have some of its own code to support core platform/chipset devices, but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
Also, it probably wouldn't work so well for typical monolithic kernels, but it should work decently on something that has user-mode driver support.
>but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).
thus calling into question why you ever bothered writing a new kernel in the first place if you were just going to piggyback Linux's device drivers onto some userspace wrapper thingy.
I'm not necessarily indoctrinated to the point where I can't conceive of Linux being suboptimal in a way so fundamental that it requires nothing less than a completely new OS from scratch. But you're never going to get there by recycling Linux's device drivers, because that forces you to design your new OS as a Linux clone, in which case you definitely did not need to write an entire new kernel from scratch.
5 replies →
Presumably if you’re meta you could pay the vendors enough to write drivers for any arbitrary OS
Writing drivers is easy, getting vendors to write *correct* drivers is difficult. At work right now we are working with a Chinese OEM with a custom Wifi board with a chipset with firmware and drivers supplied by the vendor. It's actually not a new wifi chipset, they've used it in other products for years without issues. In conditions that are difficult to reproduce sometimes the chipset gets "stuck" and basically stops responding or doing any wifi things. This appears to be a firmware problem because unloading and reloading the kernel module doesn't fix the issue. We've supplied loads of pcap dumps to the vendor, but they're kind of useless to the vendor because (a) pcap can only capture what the kernel sees, not what the wifi chipset sees, (b) it's infeasible for the wifi chipset to log all its internal state and whatnot, and (c) even if this was all possible trying to debug the driver just from looking at gigabytes of low level protocol dumps would be impossible.
Realistically for the OEM to debug the issue they're going to need a way to reliably repro which we don't have for them, so we're kind of stuck.
This type of problem generalizes to the development of drivers and firmware for many complex pieces of modern hardware.
2 replies →
But is that a good use of Meta's money? Compared to making a few patches to Linux to fix any performance problems they find.
(And I feel bad saying this since Meta obviously did waste eleventy billion on their ridiculous Second Life recreation project ...)
8 replies →
XROS had a completely new and rapidly evolving system call surface. No vendor would've been able to even start working on a driver for their device, let alone hand off a stable, complete result. It wasn't a case of "just rename a few symbols in a FreeBSD implementation and run a bunch of tests".
Things you can’t buy: vendor who cares enough to replicate your exact use cases in their lab
Vendors might say that they don't have the resources (man hours) and don't want to hand over documentation to external developers.
>> These days, you get a medium-level description and a Linux driver of questionable quality.
Then how do devices end up having drivers for major OSes? Is it all guesswork?
Yeah reverse engineering all the drivers is going to be a huge headache.
Sounds like super fun if I could be paid a bit for it.
What is an easy gate task to get into “reverse engineering some drivers for some OS”?
Second thought: I don’t even know how to write a driver or a kernel, so I better start from there.
6 replies →
> Modern hardware is just so complicated it would take a long time to completely document, and even longer to write a driver for.
You know, one would think that having complex hardware would make writing a driver easier, because the hardware can take care of itself just fine and provide a reasonable interface, as opposed to devices of yore, which you had to babysit, wasting your main CPU's time on silly stuff like sending them two identical initialization commands with a 30-to-50-microsecond delay between them or whatever.
No, the complexity usually isn't hidden. It's the driver's job to do that.
I guess one exception maybe is Nvidia who have sort of hidden the complexity by moving most driver functionality onto software on the card. At least that's how I understood it. Don't quote me on that.
3 replies →
heh, in mid-2000s all I had were a batch of misbehaving SATA controllers under freebsd, and an (actually quite well-written core of a) linux driver was all I had to work with.
Without that, we would have probably just switched hw, because the quite obscure bug was in the ASIC, and debugging that on 2005-6-ish hw is just infeasible.
It’s entirely laziness.
Wouldn’t LLMs make it way easier
LLMs trust the docs. This is a rookie mistake in driver development, especially on prerelease hardware
I think this is one area where LLMs would be particularly bad: opaque code with no documentation across the whole field.
1 reply →
Only if you are an expert who wants to use time debugging LLM code rather than coding it yourself.
PS: Half-joking, you can write some big portions with LLMs but the point stands.
The problem that is kind of glossed over here is that Meta hired a bunch of folks from Microsoft who were primarily interested in writing operating systems, and set them to work on XR. Obviously they wanted to write a custom operating system.
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
I've only seen John Carmack's public interactions, but they've all been professional and kind.
It's depressing to imagine HR getting involved because someone's feelings had been hurt by an objective discussion from a person like John Carmack.
I'm having flashbacks to the times in my career when coworkers tried to weaponize HR to push their agenda. Every effort was eventually dismissed by HR, but there is a chilling effect on everyone when you realize that someone at the company is trying to put your job at stake because they didn't like something you said. The next time around, the people targeted are much more hesitant to speak up.
I followed his posts internally before he left. He was strict about resource waste. Hand tracking would break constantly and he brought metrics to his posts. His whole point was that Apple has hardware nailed down and it’ll be efficient software that will be the differentiator. The bloat at Meta was the result of empire building.
I remember watching Carmack at a convention 15 years ago. He took a short sabbatical and came back with ID Tech 3 on an iPhone, and it still looks amazing well over a decade later.
https://www.youtube.com/watch?v=52hMWMWKAMk&t=1s
This is a guy who figures that what he wants to do most with his 3 free weekends is to port his latest, greatest engine to a Cortex-A8. Leading corporate strategy? Maybe not. But Carmack on efficiency? Just do it.
11 replies →
I followed his posts internally too. It's amazing how many people were arguing against fucking John Carmack. What a waste of talent.
15 replies →
The software for the Quest 3 is unreliable and breaks often. It makes sense that a team like that would attack attempts to hold it accountable.
1 reply →
I saw a few of those. He really leaned in on just how much waste was in the UI rendering, with some nasty looking call times to critical components. I think it was close to when he left.
Dude just seemed frustrated with the lack of attention to things that mattered.
But...that honestly tracks with Meta's past and present.
1 reply →
John can be quite blunt and harsh in person, from everyone I know who’s interacted with him.
If he doesn’t believe in something, he can sometimes be overcritical, and it’s hard to push back against that kind of power imbalance.
Carmack is a legend and I admire his work, but he seems to believe his own legend these days (like a few other big-ego gamedevs), and that can lead to arbitrary preferences being sold as gospel.
6 replies →
[flagged]
5 replies →
Which makes sense when you are one of 3 developers at ID software. There's absolutely no room for waste.
This is Meta. Let the kids build their operating system ffs. Is he now more concerned with protecting shareholder value? Who cares.
15 replies →
This is what got Lucovsky pushed out. He wanted to build an OS from scratch and couldn't see past the technical argument to acknowledge the product team's urgency to actually land something in the hands of customers. Meanwhile, he left a trail of toxicity that he doesn't even realize was there [0].
Interestingly, he was pulling the same bs at Google until reason prevailed and he got pushed out (but allowed to save face and claim he resigned willingly[1]).
[0] https://x.com/yewnyx/status/1793684535307284948 [1] https://x.com/marklucovsky/status/1678465552988381185
I saw the same thing at Google. A distinguished engineer tried gently at first to get a Jr engineer to stop trying to do something that was a bad idea. They persisted so he told them very bluntly to stop. HR got involved.
I even found myself letting really bad things go by because it was just going to take way too much of my time to spoon-feed people and get them to stop.
What kind of thing is bad enough that it warrants multiple discussions without the junior engineer getting the hint that it’s a bad idea?
2 replies →
I have mixed feelings about this. On one hand, JC is someone I look up to, at least from an engineering perspective. On the other hand, putting myself in the shoes of someone who got a once-in-a-lifetime chance to build a new OS, with corp support, for a shiny new device… I sure as hell would want to do this.
Look at the outcome of Meta's performance in AR/VR over the past few years: a fortune has been spent; relatively little has been achieved; the whole thing is likely about to be slashed back; VR, something Carmack believes in, remains a bit commercially marginal and easily dismissed; and Carmack's own reputation has taken a hit from association with it all. You can understand perfectly well why he doesn't feel that it would have been harmless to just let other people have whatever fun they wanted with the AR/VR Zuckbucks.
(Mind you, Carmack himself was responsible for Oculus' Scheme-based VRScript exploratory-programming environment, another Meta-funded passion project that didn't end up going far. It surely didn't cost remotely as much as XROS though.)
3 replies →
Reading on from that he says:
> If the platform really needs to watch every cycle that tightly, you aren't going to be a general purpose platform, and you might as well just make a monolithic C++ embedded application, rather than a whole new platform that is very likely to have a low shelf life as the hardware platform evolves.
Which I think is agreeable up to a certain point, but potentially naive beyond it. That monolithic C++ embedded application is going to be fundamentally built out of a scheduler, IO and driver interfaces, and a shell. That's the only sane way to do something like this. And that's an operating system.
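To make the point concrete, here is a toy sketch (in Python rather than C++, purely for brevity; all names are illustrative) of the kind of cooperative round-robin scheduler that inevitably ends up at the heart of any "monolithic embedded application" that juggles multiple activities:

```python
# Toy cooperative round-robin scheduler built from generators.
# Each "task" yields whenever it wants to give up the CPU; the
# scheduler re-enqueues it until it finishes. This is the minimal
# kernel of scheduling logic any monolithic embedded app grows.
from collections import deque

def run(tasks):
    """Run generator-based tasks round-robin until all finish."""
    queue = deque(tasks)
    trace = []
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))  # let the task run to its next yield
            queue.append(task)        # re-enqueue: cooperative round-robin
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def worker(name, steps):
    # Stand-in for a device-polling or IO loop.
    for i in range(steps):
        yield f"{name}:{i}"

print(run([worker("a", 2), worker("b", 3)]))
# round-robin interleaving: ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

Swap the generators for state machines driven off a timer interrupt and you have the skeleton of the scheduler the comment is describing; add driver interfaces and a debug shell on top and the "application" has become an OS in all but name.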
1 reply →
Exactly! It seems very narc-y. Just let me build my cool waste of company resources, it's not like Zucky is going to notice, he's too busy building his 11 homes.
Imagine being able to build an operating system, basically the end-game of being a programmer, and get PAID for it. Then some nerd tells on you.
9 replies →
I got the chance to do this at Microsoft, it is indeed awesome! Thankfully the (multiple!) legendary programmers on the team were all behind the effort.
Anyway, if anyone reading this gets a chance to build a custom OS for bespoke HW, and get paid FAANG salary to do so, go for it! :-D
If you want to do it you should be able to defend it against contrarian arguments that it’s a waste of time and company resources.
Yup. This is how bloat is created.
meta was a weird place for a while. because of psc (the performance rating stuff) being so important… a public post could totally demoralize a team because if a legend like carmack thinks that your project is a waste of resources, how is that going to look on your performance review?
"impact" is Facebook-speak for "how useful is this to the company", and it's an explicit axis of judgement.
How large is their headcount these days? And how many actually useful products have they launched in the last decade? You could probably go full Twitter and fire 90% of the people, and it would make no difference from a user perspective.
1 reply →
But... That's not an HR violation. If something a team is working on is a waste of resources, it's a waste. You can either realize that and pivot to something more useful (like an effort to take the improvements of the current OS project and apply them to existing OSes), or stubbornly insist on your value.
Why is complaining to HR even an option on the table?
21 replies →
Facebook has literally done very little in terms of new breakthrough products in a decade at least, and Bytedance has apparently just beat them on revenue.
Yeah, people getting really angry if you say anything bad about a product (!) is a depressing commonality in certain places these days.
I got angry emails from people because I wrote "replacing a primary page of UI with this feature I never use doesn't give me a lot of value" because statements like that make "the team feel bad". It was an internal beta test with purpose of finding issues before they go public.
Not surprisingly, once this culture holds root, the products start going down the drain too.
But who cares about good products in this great age of AI, right?
When I compare workplace dynamics in the American company I work for with local company a friend of mine works for, I feel like I sold my soul to the devil.
Masters of Doom portrays Carmack as a total dictator of a boss. Doom Guy by John Romero seems to back this up.
Masters of Doom does seem to want to, however accurately or not, set Carmack up as the antagonist of its story against Romero as the hero sometimes. I think that readers just largely didn't notice that, since Carmack's heroic image was already so firmly established. In fact, some of the early-id stuff really does seem to raise some questions. (Was Tim Willits mostly Carmack's protégé, for instance?)
1 reply →
[dead]
> I've only seen John Carmack's public interactions, but they've all been professional and kind.
You don't know someone or how they really behave because they are a public figure.
I’ve been on both the same side and the opposing side of debates with him, both in person and over internal discussion threads. His public persona and private behavior match. I viewed it positively, though per the topic of the thread, not everyone did.
1 reply →
meta tends to keep people so on edge, with performance so heavily based on peer agreement, that it creates a sort of defensive toxic positivity
a little bit a negative feedback at high level can domino quickly too. massive pivots, reorgs, the works.
If you're in high leadership, even just being pessimistic can be a massive morale killer. It doesn't mean that going to HR is the right call but I could see how someone would vent that way.
If you are senior leadership and you find that your org has some people doing useless side projects for fun (and tons of money) that deliver no value, your job is to solve that problem by reassigning or firing them.
Facebook VR never needed a new OS in the first place. It needed actual VR.
Hehehe. I have talked to John Carmack a few times. He's super harsh and has zero filter or social niceties (Asperger's level, not that he is, but just sayin'). If you are not used to it or don't understand where it's coming from, it can be quite a shock. Or at least he was, many years ago. Maybe he's changed.
I can see that. Sadly, there are a lot of people in the world who simply don't know how to deal with people who can be direct, if not somewhat abrasive, in their communication style. Their intent can be noble, well-intentioned, and not meant to offend. They simply don't beat around the bush or worry about whether your fragile ego will be bruised when they make an observation.
I've had to coach people and help them understand the entitlement involved in demanding that everyone adjust and adhere to their personal preferences and communication style. In my experience, it's about seeking to understand the person and adapt accordingly. Not everyone is willing to do that.
10 replies →
It is very much not an objective discussion if you are discussing whether it makes sense to develop a new operating system.
How is it not?
6 replies →
Sorry but if you know his story, seen candid videos of him, or talked to the people around him, he's a Linus-level "I'll say what I want" type.
These weird hagiographies need to go. Carmack is absolutely not known to be kind. I have no idea what happened here, but the idea that he's this kindly old grandpa who could never, ever be rude or unprofessional is really out there.
And stupid. Like it or hate it, a no-nonsense, direct-speaking, but fair and objective boss is the one you want. No one is served by failure; not the people at the top, nor the people at the bottom.
There is a difference between “this project is not going to work” vs “these people are incompetent and the project should be cancelled as a result”. The former needs to be said, the latter is a HR violation.
2 replies →
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
This is one of the reasons I’m sick of working pretty much anywhere anymore: I can’t be myself.
Appreciating people for their differences when they are humble and gifted is easy. I side with liberals, but I have a mix of liberal, moderate, and conservative friends.
But there are only so many years of pretending to appreciate all of the self-focused people that could be so much better at contributing to the world if they could quietly and selflessly work hard and respect people with different beliefs and backgrounds.
I’m happy for the opportunity I have to work, and I understand how millennials think and work. But working with boomers and/or gen X-ers would be so much less stressful. I could actually have real conversations with people.
I don’t think the problem is really with HR. I think the problem is a generation that was overly pandered to just doesn’t mix with the other generations, and maybe they shouldn’t.
If the younger generation is too pandered and can’t take criticism or honest feedback, thats the fault of the older generation.
I think the issue is, Carmack didn't talk like a "normal" facebook engineer.
Supposedly you were meant to have your disagreements in private and come to support whatever was decided: "hold your opinions lightly". The latest version of it was something like "disagree and commit".
This meant that you got a shit tonne of group think.
This pissed off Carmack no end, because it meant shitty decisions were let out the door. He kept banging on about "time to fun": any feature that got in the way of starting a game up as fast as possible would get a public rebuke (rightly so).
People would reply with "but the metric we are trying to move is x, y & z", which invariably would be some sub-team PSC (read: promotion/bonus/not-getting-fired system) optimisation. Carmack would basically say that the update was bad and they should feel bad. This didn't go down well, because up until 2024 one did not speak negatively about anything on Workplace. (Once Carmack reported a bug to do with head tracking [from what I recall]; there was lots of back and forth, with the conclusion "won't fix, don't have enough resources". Carmack replied with a diff he'd made fixing the issue.)
Basically Carmack was all about the experience, and Facebook was all about shipping features. This meant that areas of "priority" would scale up staffing. Leaders distrusted games engineers("oh they don't pass our technical interviews"), so pulled in generalists with little to no experience of 3D.
This translated into small teams that produced passable features growing 10x in 6 months and then producing shit. Because they'd grown so much, they constantly re-orged, pushed out the only 3D experts they had, and could then never deliver. But as it was a priority, they couldn't back down.
This happened to:
Horizons (the original roblox clone)
video conferencing in oculus
Horizons (the shared experience thing, as in all watching a live broadcast together)
Both those Horizons (I can't remember what the original names were) were merged into Horizon Worlds, along with the video conferencing for Workplace.
Originally each team was around 10 people; by the time I left, it was something like a thousand or more, with the original engineers either having left or moved on to something more productive.
tldr: Facebook didn't take to central direction setting, i.e. "before we release product X, all its features must work, be integrated with each other, and have an obvious flow/narrative that links them together". Carmack wanted a good product; Facebook just wanted to iterate shit out the door to see what stuck.
Mechanisms for getting the Linux kernel out of the way are pretty decent these days, and CPUs with a lot of cores are common. That means you can isolate a bunch of cores, pin threads the way you want, and then use some kernel bypass to access hardware directly. Communicate between cores using ring buffers.
This gives you the best of both worlds: a carefully designed system for the hardware with near-optimal performance, and still the ability to take advantage of the full Linux kernel for management, monitoring, debugging, etc.
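The core-to-core communication piece of that design can be sketched as a single-producer/single-consumer ring buffer. This is a toy Python illustration of the pattern only; a real kernel-bypass setup would place the ring in shared or hugepage memory, pin producer and consumer to isolated cores (e.g. via `os.sched_setaffinity` on Linux), and use atomic head/tail indices:

```python
# Toy single-producer/single-consumer (SPSC) ring buffer: the pattern
# used to pass messages between threads pinned to isolated cores.
# One slot is kept empty so "full" and "empty" are distinguishable.
class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # next slot the consumer reads
        self.tail = 0  # next slot the producer writes

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False  # ring full; producer must retry/spin
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:
            return None  # ring empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

# Usage: capacity 4 means 3 usable slots.
ring = SpscRing(4)
assert ring.push(1) and ring.push(2) and ring.push(3)
assert not ring.push(4)  # full: only capacity-1 slots usable
assert ring.pop() == 1
assert ring.push(4)
assert [ring.pop() for _ in range(3)] == [2, 3, 4]
```

Because only the producer writes `tail` and only the consumer writes `head`, no lock is needed in the single-producer/single-consumer case, which is what makes this pattern attractive for busy-polling pinned cores.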
> use some kernel-bypass to access hardware directly
You can always mmap /dev/mem to get at physical memory.
No, that's not really what kernel bypass means.
1 reply →
I was at Google when the Flutter team started building Fuchsia.
They had amazing talent. Seriously, some of the most brilliant engineers I've worked with.
They had a huge team. Hundreds of people.
It was so ambitious.
But it seemed like such a terrible idea from the start. Nobody was ever able to articulate who would ever use it.
Technically, it was brilliant. But there was no business plan.
If they wanted to build a new kernel that could replace Linux on Android and/or Chrome OS, that would have been worth exploring - it would have had at least a chance at success.
But no, they wanted to build a new OS from scratch, including not just the kernel but the UI libraries and window manager too, all from scratch.
That's why the only platform they were able to target was Google's Home Hub - one of the few Google products that had a UI but wasn't a complete platform (no third-party apps, for example). And even there, I don't think they had a compelling story for why their OS was worth the added complexity.
It boggles my mind that Fuchsia is still going on. They should have killed it years ago. It's so depressing that they did across-the-board layoffs, including taking away resources from critically underfunded teams, while leaving projects like Fuchsia around wasting time and effort on a worthless endeavor. Instead they just kept reducing Fuchsia while still keeping it going. For what?
Not only did they target Home Hub, they basically forced a rewrite on it (us, I worked on the team). After we already launched. And made our existing workable software stack into legacy. And then they were late. Then late again. And late again. With no consequences.
100% agree with your points. To me watching I was like -- yeah, hell, yeah, working on an OS from scratch sounds awesome, those guys have an awesome job. Too bad they're making everyone else's job suck.
By forced I guess you’re referring to the room full of leads who all said yes, but then reported otherwise back down to their ICs to avoid retribution. I caught early wind of this from folks being super rude in early on-the-ground discussions and tried to raise it with Linus. One of the directors got his knickers in a twist and accused me of making a mountain out of a molehill. I guess clearly not, as the sentiment and division still stands.
2 replies →
Other teams decommitting is just how it goes.
It's a lot of work and hard to justify if you're looking for short term improvements. But if you're really committed to long term improvements, it absolutely makes sense. Google is actually willing to make long term investments. Publicly justifying the investment has never been a goal of the project which is why most folks probably don't understand it. Honestly I'm not sure why folks care enough to even do commentary on it. If you find it useful, you can participate, if not just ignore it.
Fwiw inventing a new application ecosystem has never been a goal and is therefore not a limitation for its viability. The hard part is just catching up to all the various technologies everyone takes for granted on typical systems. But it's not insurmountable.
I'm also not sold on the idea that having more options is ever a bad thing. People always talk about web browser monoculture and cheer on new entrants, yet no one seems to mind the os monoculture. We will all come out ahead if there are more viable OS out there to use.
> People always talk about web browser monoculture and cheer on new entrants, yet no one seems to mind the os monoculture. We will all come out ahead if there are more viable OS out there to use.
3 main OSes vs 2 main browser engines for consumers to choose from?
Anyway the main issue with the Browser engine consolidation is that whoever owns the Browser engine, can make or break what goes in there. Just think about VSCode's current status with all the AI companies wanting to use it and make it their own product, while MSFT attempting to curtail it. At some point either MSFT decide it commit to FOSS on this one, or the multiple forks will have to reimplement some functionalities.
I think the hope is that you just start there. They might have migrated the meeting room devices. Why would you set out to replace *everything* at once? Do something, get some revenue/experience, then try to fan out.
Wasn’t Fuchsia supposed to be a platform where different OS could run in a virtual environment and software packages would be complete containers? Was not this a new way of tackling the ancient OS problem?
These were my imaginations. I thought maybe an OS that could run on the web. Or an OS that could be virtualized to run on several machines. Or an OS that could be run along several other instances on the same machine each catering to a different user.
That doesn't sound anything like what fuchsia is or ever was. Fuchsia takes a different set of tradeoffs with respect to baseline primitives and built a new stack of low level user space on top of those new primitives. This gives the software fundamentally different properties which might be better or worse for your use case. For consumer hardware products I think it comes out ahead, but only time will tell.
1 reply →
Reinventing QNX will be cutting edge for decades to come.
Yeah, those were definitely your imaginations.
I always felt that Fuchsia was a make-work program to keep talented kernel engineers away from other companies. Sort of a war by attrition.
That's a weird rumor that I'm not sure I understand. Things are not that complicated.
2 replies →
And the crazy thing is there is arguably a lot more reason for Meta/Oculus to have had its own operating system, because it was meant for a specific configuration of hardware and to utilize those hardware resources toward quite a different goal than most other OSes out there. Even in that environment it was still a waste.
I guess it's just a political shit show at this point. Ideas go hard if the people behind them aren't playing the game well enough, no matter their value.
There's few things worse for the long-term health of a software project than people who have hammers and are hunting for nails for them.
1 reply →
My understanding is that people are working on Fuchsia in name only at this point. Of course some people are passionate enough to try and keep it alive, but it’s only useful to the degree that it can help the Android team move faster.
I always wonder why companies prefer rolling the dice to pragmatism.
A bad business decision, yes. But is it any good?
Back in, mmm, like 2002 or 2003 or 2004, while at Microsoft, I read an internal paper from a few OS guys who hackathoned something for Bill Gates's Think Week (which is when he used to go to some island in the San Juans or somewhere similar and just read curated papers and think; it was a huge prestige to get such a paper to him). That something was an OS written from scratch, with GC and memory management, on top of something very .NET Framework-y (which had been released a couple of years earlier). They had it booting on all kinds of hardware and doing various neato things. One of the explicitly called-out design principles was zero compatibility with anything Windows before, which is why it didn't go anywhere, I assume. I remember it was just a handful of engineers (presumably OS folks) hacking for like a month. It was awesome to read about.
Was it Singularity?
https://en.wikipedia.org/wiki/Singularity_(operating_system)
https://www.microsoft.com/en-us/research/project/singularity...
Singularity was cool. I'm sad that it was abandoned. The concept of using software isolation instead of hardware memory protection was really interesting.
It was a multi-year project at Microsoft Research with a team of >100 developers.
https://www.zdnet.com/article/whatever-happened-to-microsoft...
I am very certain in my recollection that this started much earlier, as a hackathon skunkworks, before something like this happened at MSR. It didn't do anything beyond a kernel and a command line; there was no browser. I don't know if those two shared roots either. Anyhow, yeah, both were still intellectual feats!
> my old internal posts... got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
That jibes with my sense that Meta is a mediocre company
It matters who you communicate concerns to. Something as fundamental as "I think that your team shouldn't even exist" should go to the team leads and their managers exclusively at first. Writing that to the entire affected team is counterproductive in any organization because it unnecessarily raises anxiety and reduces team productivity and focus. Comments like this from influential people can have big mental and physical health impacts on people.
This entire situation looks very suspicious. Was Carmack even responsible for triaging research projects and allocating resources for them? If yes, then he should have fought that battle earlier. If no, then the best he could do is to refuse to use that OS in projects he controls.
4 replies →
Not when it's a personal opinion he expected nothing to follow from.
"I think that your team shouldn't even exist" doesn't mean "I want your team to no longer exist.".
2 replies →
If I were on that team I'd welcome the opportunity to tell John Carmack why he was wrong, or, if I agreed, to start looking for another project to work on.
When I was on nuclear submarines we'd call what you are advocating "keep us in the dark and feed us bullshit."
3 replies →
Maybe on a mediocre team. But that was the parent comment's point.
On well-functioning teams, product feedback shouldn't have to be filtered through layers of management. In fact, it would be dishonest to discuss something like this with managers while hiding it from the rest of the team.
> Comments like this from influential people can have big mental and physical health impacts on people.
So what are we supposed to do? Just let waste continue? The entire point of engineering is to understand the tradeoffs of each decision and to be able to communicate them to others...
I'm sure that kind of crap helped nudge JC out of there. He mentions (accurate and relevant) reasons why something is probably a bad idea, and the person in charge of doing it complains that JC brought up the critiques, rather than addressing the critiques themselves. What a pathetic, whiny thing to do.
You've got to remember that context is critical with stuff like this.
There's nothing wrong with well-founded and thoughtful criticism. On the other hand, it is very easy for this to turn into personal attacks or bullying - even if it wasn't intended to be.
If you're not careful you'll end up with juniors copying the style and phrasing of less-carefully-worded messages of their tech demigod, and you end up with a huge hostile workplace behaviour cesspit.
It's the same reason why Linus Torvalds took a break to reflect on his communication style: no matter how strongly you feel about a topic, you can't let your emotions end up harming the community.
So yes, I can totally see poorly-worded critiques leading to HR complaints. Having to think twice about the impact of the things you write is an essential part of being at a high level in a company, you simply can't afford to be careless anymore.
It's of course impossible to conclude that this is what happened in this specific case without further details, but it definitely wouldn't be the first time something like this happened with a tech legend.
Ugly people like to blame the mirrors.
What would be the real advantage of a custom OS over a Linux distribution?
The OS does process scheduling, program management, etc. Ok, you don’t want a VR headset to run certain things slowly or crash. But some Linux distributions are battle-tested and stable, and fast, so can’t you write ordinary programs that are fast and reliable (e.g. the camera movement and passthrough use RTLinux and have a failsafe that has been formally verified or extensively tested) and that’s enough?
I think the proper comparison point here is probably what game consoles have done since the Xbox 360, which is basically run a hypervisor on the metal with the app/game and management planes in separate VMs. That gives the game a bare metal-ish experience and doesn't throw away resources on true multitasking where it isn't really needed. At the same time it still lets the console run a dashboard plus background tasks like downloading and so on.
Hold on a sec, is that the same on PS5? I'm pretty sure that wasn't the case two generations ago. Is that the norm now, running on a hypervisor?
1 reply →
For this use case a major one would be better models for carved up shared memory with safe/secure mappings in and out of specialized hardware like the gpu. Android uses binder for this and there are a good number of practical pains with it being shoved into that shape. Some other teams at Google doing similar stuff at least briefly had a path with another kernel module to expose a lot more and it apparently enabled them to fix a lot of problems with contention and so on. So it’s possible to solve this kind of stuff, just painful to be missing the primitives.
Based on the latter tweet in the chain, I'm wondering if Carmack is hinting that Foveated Rendering (more processing power is diverted towards the specific part of the screen you're looking at) was one advantage envisioned for it. But perhaps he's saying that he's not so sure if the performance gains from it actually justify building a custom OS instead of just overclocking the GPU along with an existing OS?
Wouldn't that be an application (or at most system library) concern though? The OS is just there to sling pixels, it wouldn't have any idea whether those pixels are blurry… well for VR it would all be OpenGL or equivalent so the OS just did hardware access permissions.
1 reply →
Just overclock (more) the system that’s already in a severe struggle to meet power, thermal and fidelity budgets?
Maybe not applicable for the XR platform here, but you could add introspection capabilities not present in Linux, a la Genera letting the developer hotpatch driver-level code, or get all processes running on a shared address space which lets processes pass pointers around instead of the Unix model of serializing/deserializing data for communication (http://metamodular.com/Common-Lisp/lispos.html)
You can do that on Linux today with clone(CLONE_VM), which gives a new process the parent's address space (vfork is the narrow special case that also suspends the parent).
I stated this elsewhere, but at least six years ago a major justification was a better security model. At least that’s what Michael Abrash told me when I asked.
Think you answered your own question. No real differences except more articles, $, and hype
And, let's be real here: engineering prestige.
Everyone wants to make an OS because that's super cool and technical and hard. I mean, that's just resume gold.
Using Linux is boring and easy. Yawwwwn. But nobody makes an OS from scratch, only crazy greybeard developers do that!
The problem is, you're not crazy greybeard developers working out of your basement for the advancement of humanity. No. You're paid employees of a mega corporation. You have no principles, no vision. You're not Linus Torvalds.
My objection is that there is no universe in which Meta can be trusted with direct access to your raw gaze tracking data.
The only thing I can imagine that would be more invasive would require a brain implant.
My understanding is that this is a key tenet of visionOS's design, where apps don't get access to gaze data (I think unless they're taking over the full screen?)
sadly they are working on it
Huawei seem pretty committed to building their own OS and uncoupling from the Western technology stack in total
https://en.wikipedia.org/wiki/HarmonyOS_NEXT https://www.usenix.org/conference/osdi24/presentation/chen-h...
The only reason Chinese companies can even get away with these big projects is because of state backing and state objectives. By itself, the market doesn't support a new general-purpose OS at this point.
> because of state backing and state objectives
MS is a state backed company. Very natural that China went the same path.
12 replies →
'China only succeeds for evil reasons'
Besides, the statement's completely nonsensical - there were multiple OSes developed by for-profit corporations in the West (Microsoft, Apple, Nintendo, QNX, Be, etc.).
It's kind of an extraordinary statement that an OS couldn't be developed by a for-profit organization, especially if the hardware's somewhat fixed and you don't need to support every piece of equipment under the sun.
Actually the “market” won’t prioritize anything that won’t give returns as soon as possible (except for the weird situation of VC money being poured in).
You're downvoted but you're 100% correct.
It makes absolutely zero financial sense to create a new general purpose operating system.
That's billions of lines of code. With a B. And that's just the code - getting it to work with hardware?
Do YOU want to talk to 10,000 hardware vendors and get them on board? No! Nobody does! That's just money burning!
But, there are valid political reasons for creating a new general purpose OS.
2 replies →
lol, the market has tons of support for an OS that can't be sanctioned, especially for Huawei, which, you know, is.
They actually reuse the Linux driver stack for hardware compatibility
Geopolitical reasons for making your own OS are actually reasonable and understandable. Not saying they are good, because I would much prefer a planet where we collaborate on these things… but they’re not dumb. They make sense in a similar way the space race made sense.
> I wish I could drop (so many of) my old internal posts publicly, since I don’t really have the incentive to relitigate the arguments today – they were carefully considered and prescient. They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad, but I expect many of them would acknowledge in hindsight that the Meta products would not be in a better place today if the new OS effort had been rammed into them.
So someone at Meta was so sensitive that being told their behemoth of a project was ill advised ended up getting reported to HR?
Yep, I can 100% believe it having worked in such environments
I was there when they wanted to do the custom XROS. I remember asking them in a Q&A session exactly why they would build this and I recall the reasoning behind it totally fell flat. Fundamentally it became clear these guys just wanted to write a new OS because they thought it would be cool or fun.
Many of the scenarios they tried to address could have been handled with Mach or some realtime kernel, or with Fuchsia. I recall later on they did consider using Fuchsia as the user space for the OS for some time.
On another note, there was similarly an equally "just for fun" language effort in the org as well (e.g. "FNL"). That was also conceived by a bunch of randos who weren't compiler guys, had no product vision, and just did it for fun.
Well when the era of efficiency arrived all of this stuff ended.
Late 2019 I had a short conversation with Abrash about a new OS for the next set of glasses and my immediate reaction was “why?” He was adamant that there was a security need which Linux could not fill (his big concern was too much surface area for exploits in the context of untrusted 3rd party code). I remember thinking that this would be a surprise to cloud engineers at the big hosters, but chose not to continue the argument. He didn’t get where he is by being dramatically wrong very often, after all, but it still struck me as a waste. Note I did not work at Meta so he may have had stronger justifications he chose not to expose.
I worked on a completely different hardware project within Meta, and while they didn't want a custom OS, they used an off-the-shelf RTOS with the intention of modifying it, and it was a shit show. They had a million justifications for why they needed it, but they had no performance tests or metrics to actually justify it. They incurred a huge development overhead for no verifiable performance improvements.
All of the code they wrote could have just been written as a Linux kernel module. It would've also been so much easier, given all the documentation and general knowledge people have about Linux, both within the company and outside it.
You could write a book on why it's practically impossible to create a new OS these days. Love Carmack for stating it so clearly. I also love that he called out TempleOS; I too have a weird respect for it. Plan 9 is probably the best example of a totally new OS, and I hope someday it becomes viable because it's really a joy to use.
But ultimately it just makes sense to take an existing kernel/OS (say, Arch) and adapt it to your needs. It can be hair-pullingly frustrating, it requires the company to be willing to upstream changes, and it still takes years. But the alternative is decades, because what sounds good and well designed on paper just melts when it hits the real world, and Linux has already gone through those decades of pain.
The OS isn't the hard part.
The driver ecosystem is the moat. Linux only overcame it decades later.
Android built a new, giant moat for Linux (or "Linux" depending on your opinions about Android) in the embedded application processor space - now the "standard" board support package target for new embedded AP hardware is almost always some random point-in-time snapshot of Android. Running "mainline" Linux is hard (because the GPU and media peripheral drivers are usually half-userspace Android flavored stuff and rely on ION and other Androidisms) and bare-metal is even worse (where previously, you'd get register-level documentation, now, you get some Android libXYZ.so library).
1 reply →
Yeah, the Linux kernel has ~12m lines of code. <1m are the core, the rest are drivers.
7 replies →
I would read that book.
There is also ACPU OS https://www.acpul.org/blog/so-fast
Writing TempleOS software taught me lower-level programming! The OS is weird and idiosyncratic, but much more polished and logical than you'd expect from seeing videos of its author.
I think people have forgotten about Google Fuchsia which I guess is a good sign for a new OS. They’ve done quite well in deploying it seamlessly to their consumer devices.
"Quite well" by what metric? It shipped on one device. That's pretty much the lowest bar you can imagine! Did it provide any tangible benefit to anyone? Let alone a benefit commensurate with the enormous cost of developing it and continuing to maintain it?
I think it was insane to start a new OS effort written in C/C++. We have plenty of OSes written in C/C++! We know how that story ends. If you're going to spend the effort, at least try a new language that could enable a better security model.
While I agree with the sentiment, given my bias towards safe systems languages, Genode OS — a relatively recent research OS — is pretty much mostly C++, although they've added some Ada/SPARK as well.
It was so good Google cancelled plans to use it in meaningful products and instead delegated it to the bottom shelf products.
Fun rumor: Google shut down the AR effort and transferred the team to project Fuchsia as a way to retain highly skilled employees. So essentially they didn’t have any real technical needs for a new OS.
Fuchsia is pretty much a dead product at this point. For things like Phones and Laptops google is using only Android going forward.
I thought they rolled back all of those efforts. What devices shipping today come with Fuchsia installed?
According to Wikipedia, it looks like only the Nest Hub.
1 reply →
Google Nest Hub
Isn't that somewhat debatable? Originally they were aiming at much more (a Chromebook OS, for example), but it seems like they settled on Google Home only as their scope.
Still a very interesting project, but it feels like a similar story: for limited use cases (a smart thermostat/speaker with specific hardware) it works, but for wider use cases with heterogeneous hardware and complex interfaces (an actual screen, peripherals) it didn't.
XROS was a Fuchsia fork, actually.
The reviews I've seen of its stability and usefulness have not been good.
That was also frustrating in that seL4 was right there. Why not invest the effort in the existing thing?
1 reply →
Yup it’s because they had to rewrite the drivers I presume. Always going to be the biggest issue with any new OS.
What if instead of writing the entire OS, a company were to pick up an existing “hobby” OS and refine it?
For example any of the systems listed in Carmack’s post. Or perhaps Serenity OS, RedoxOS, etc.
In that case, why wouldn't they "just" fork Linux? Or 10-years-ago-Linux?
The technical justification for Meta writing their own OS is that they'd get to make design decisions that suited them at a very deep level, not that they could do the work equivalent of contributing a few drivers to an existing choice.
How is that different from what they did? Meta stuff is on Linux, PlayStation and Nintendo on BSD, etc.
If you mean exotic ones then the answer is the parts that are written are the easy parts and getting support for hardware and software is hard.
ACPU OS is also good for that https://www.acpul.org/blog/so-fast
> I wish I could drop (so many of) my old internal posts publicly, since I don’t really have the incentive to relitigate the arguments today – they were carefully considered and prescient. They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
Carmack being Carmack, I'm sure the HR report came to nothing but it's just another reminder of the annoyances I don't miss about working at a BigCo. In the end, it doesn't matter that it went nowhere, that he was right or that it was an over-reaction and likely a defensive move in inter-group politics Carmack wasn't even playing - it just slowly saps your emotional energy to care about doing the right things in the right ways.
My first month at Amazon someone reported me for laughing at them…I didn’t even know they existed, on the other end of the open floor. I was laughing at something completely unrelated.
That made me really think about how fragile and toxic people can be.
Another Amazonian almost got fired for reacting with a monkey covering eyes emoji to a post shared by a black person (no malintent, of course, just an innocent “mistake” most normal people wouldn’t even think twice about).
You don't report someone like Carmack to HR, it will only backfire and make your ignorance even more visible.
Also, I am not surprised he was reported -- typical underhanded political hustling commonplace at Meta.
Report whoever you want to HR people. Don't listen to this guy ^
2 replies →
Jonathan Blow is the world’s most successful hobbyist programmer. His whole thing is doing projects from scratch. Every game he made could be done in Unity with far less effort.
Most opinions of this man exist in a vacuum, isolated from the real-world software industry. Building an OS from scratch is one of those examples.
It never seems like there's a significant reason behind them other than………"I made dat :P"
As an outsider...his games just look and feel different. They feel like bones-deep art, in a way that even the best of the best games (say, Hades) don't. Since Blow's games are puzzle games they're not even my favorite games! But the effort spent on making them exactly the way he wants them pays off.
It is genuinely ridiculous to say that The Witness could "have been made in Unity with far less effort". It's easy to forget that people on this and every forum love to just say stuff for the sake of having said something, until you encounter a topic with which you are extremely familiar.
I don't think Unity was polished enough when Braid came out in ~2008 to easily rewind time on low-end Xbox hardware. The Witness, maybe, in Unreal? But The Witness does some wild things I've never seen an Unreal game do.
He earned the acknowledgment of his peers at GDC for the work he has made. Anyone can make games with Unity, just like anyone can write a novel in Word; making one without pre-made tooling, now that is a skill in itself.
Why is there such a meme among gamers about Unity- and Unreal-based games? Exactly because so many put in so little effort that it's clear where the game is coming from.
If he is selling his games, is he a hobbyist?
Sigh. It's really depressing how a technical discussion on merits of a solution keeps getting reported to HR. I've seen it many times.
Someone said your preferred design won't work, and you go to HR.
I gladly throw my idea under the bus when I hear why it's bad.
Now offering any critique of a thing in order to help the company comes with a career risk.
Is the difficulty in theoretical complexity of operating systems, or in project scoping/scope creep?
It's probably not that hard to write bare-metal code for a modern CPU that runs and crashes. It's obviously insurmountably hard to compete with Android in features with scratch-built bare-metal code. An "OS" can be anything between the two. But it's very easy to imagine an "XR OS" project snowballing quickly into the latter, and Carmack's concerns would be spot on (as always, and as proven). Is it then an inherent difficulty in "designing a new operating system", or is it technically something else?
XROS definitely suffered from scope creep and an outrageous set of goals/priorities.
the thread lists examples like 3rd party software
Why bother making a new OS when you can make a new user interface for an existing OS?
The drivers are the hard part. It takes a lot of inter-industry collaboration to get driver compatibility
People are nailing it here -- it's not the "OS" per se (heck, look at CP/M or original unix and this gives you a floor) -- it's the drivers and the required Standard Pieces: DMA, memory protection, TCP/IP, BLE, WiFi, Ethernet, GPU, USB (+ vast USB drivers), etc. Lotta 'standard' software must work on any new OS + drivers for all that hardware that is always changing. Great 'fun' .... Now, I do think Minix had a chance for a very short window, but lack of resources made sure it's now only an OS history footnote. All because a microkernel sucks 3% performance away from monolithic kernels. We are confused as an Industry... Performance uber alles -- that has just GOT to get fixed. Security; maintainability; simplicity > performance; or rather, those are worth some performance degradation.
> To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers.
I mean, I'd give a fair shake to an OS from the SQLite team [1].
1. https://sqlite.org/codeofethics.html
Where do I apply to join this monastic order of OS programmers?
Just walk up to the gate of your nearest Concent next Apert and they will take you right in!
Actually, I don't know how you join the Ita now that you mention it.
1 reply →
I'd love a truly new OS, but I just don’t know what it would look like at this point? "New OS" ideas tend to converge on the same trunk.
Building a hobby OS taught me how little is just "software". The CPU sets the rules. Page tables exist because the MMU says so. Syscalls are privilege flips. Task switches are register loads and TLB churn. Drivers are interrupt choreography. The OS to me is just policy wrapped around fixed machinery.
I think any OS can be divided into a "backend" that deals with the hardware and a "frontend" user-level applications with a UI. The backend is mostly similar everywhere, while the frontend is what the general public typically perceives as the "OS". It's hard to see anything truly new in the "invisible" backend, but the frontend changes with every update (Windows, Mac, Linux etc). ACPU OS is a good example of this, where the backend can be a different OS, an emulator or actual hardware, while the frontend remains the same across all execution environments. https://www.acpul.org/blog/so-fast
Try to quickly spawn a lot of processes on Windows.
To be fair, a monastic order of engineers is absolutely what the world could use about now.
The XROS thing sounds sort of like PenPoint OS -- which was used with the EO 440 and EO 880 tablet + cellphone-connected computers that came out around the same time as Newton (early 90s), but with larger screens and cellular voice/data/fax connectivity (optional). Their tagline was "The Pen is the Point". Besides having a Wacom tablet as the pen-input device (requiring a driver), it baked in the notion (true at the time) that connectivity was sporadic, and therefore you had to be opportunistic when you got a reliable cell signal (or were plugged into a phone jack). Those two ideas sure as heck did not require a whole new OS to support. But PenPoint built a company to market said OS. https://en.wikipedia.org/wiki/PenPoint_OS?useskin=vector Interestingly, this company ended up being folded into EO itself, as there seemed to be no market for a pen-based OS.
The EOs https://en.wikipedia.org/wiki/EO_Personal_Communicator?usesk... used the AT&T Hobbit chipset, which was a descendant from the CRISP architecture. https://dl.acm.org/doi/pdf/10.1145/30350.30385 by Dave Ditzel et al. The architecture was informed by examining millions of lines of unix C code; the arch was an attempt to execute C code well. https://en.wikipedia.org/wiki/AT%26T_Hobbit?useskin=vector It was a beautiful overall design. The design focused on fast instruction decoding, indexed array access, and procedure calls. The 32-bit architecture of Hobbit was well-suited to portable computing, and almost ended up in the Apple Newton. The manual is possibly worth a peruse: http://www.bitsavers.org/components/att/hobbit/ATT92010_Hobb...
There is another doomed project that XROS reminds me of: the Apple "Pink" OS. Brief history: https://lowendmac.com/2014/pink-apples-first-stab-at-a-moder... "Pink was spun out as Taligent. The kernel was jettisoned. Taligent would run on top of an operating system and act as an object oriented system (like OpenStep). It was released in 1995, but it sold poorly. It was canceled altogether in 1998." https://en.wikipedia.org/wiki/Taligent?useskin=vector more history http://www.roughlydrafted.com/RD/Q4.06/36A61A87-064B-470D-88... After Apple tried Pink, Taligent, and Copland, they ended up using Mach / FreeBSD and some pieces from other BSDs (as I understand it). Today, we have Windows and Unix of some flavor in the main. I think Geordi La Forge was using one of these OSes on his warp drive computers...
The engineers were right. “If not us then who”.
Meta has the talent and the balance sheet to pull this off. Worst case scenario we end up with one more open sourced operating system. Who knows what happens 20 years down the line.
> They also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
Sigh... Usual company politics.
No matter how much money you pour in, with top talent, code quality, documentation, etc., developing a custom OS doesn't make sense.
Been there, seen that. I faced a similar situation at one company. They failed on a custom, Not-Invented-Here-syndrome-derived implementation. My technically correct skepticism was criticized for decreasing the morale of the team working on it.
I love this part: "To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers"
Whom the gods would destroy, they first persuade to design an OS :)
I've been developing a solo ACPU OS for many years now, one that's fast and simple enough to be better than any known OS. That's why I believe all OS development problems come from overengineering and overcapitalization. https://www.acpul.org/blog/so-fast
This is completely right from a product point of view, which is Carmack's argument.
But I have wondered why one of these companies with billions of dollars to burn hasn't tried to create something new as a strategic initiative. Yes, there wouldn't be any ROI for years, and yes, the first several products on the platform would probably be better off on something more traditional.
But the long term value could potentially be astronomical.
Just another case of quarterly-report-driven decision making, I suppose. Sigh.
> But I have wondered why one of these companies with billions of dollars to burn hasn't tried to create something new as a strategic initiative.
See Google's Fuchsia: https://en.wikipedia.org/wiki/Fuchsia_(operating_system)
> But the long term value could potentially be astronomical.
Such as what?
> Such as what?
Historically? The internet, the concept of a graphical user interface, the mouse, the smartphone, the LCD display, the laser printer...
It's about clever people trying weird stuff, and occasionally ending up with a world-changing idea. Asking for examples of to-be-discovered innovations is, by definition, an impossibility.
1 reply →
How does it pay off in the long run?
If you're competing against nothing, then I see it: it opens up a wide variety of product possibilities. But linux exists. Why not spend 1/1000th the time to adapt linux?
That's not even counting the rather substantial risk that your new OS will never approach the capabilities of linux, and may very well never become generally usable at all.
Option A: spend years and millions on a project that may never be as good as existing solutions, diverting attention and resources from actual products, or...
Option B: work on products now, using an existing, high-quality, extensible, adaptable OS, with very friendly licensing terms, for which numerous experts exist, with a proven track record of maintenance, a working plan for sustainability, a large & healthy developer community exists, etc.
It's hard to imagine how it wouldn't be a complete waste of time.
Apple bought one of those in the 90s, and they are still reaping the benefits of that strategic initiative. But the thing is, NeXT allowed Apple to think up new, differentiated products. If you come at the problem of the OS from a purely technical perspective, you'll waste time for no gain.
Microsoft had Singularity - canceled after 12 years in development
Google has Fuchsia - is about 10 years in development. Recently was a target for layoffs
This is what Google has been trying to do with Fuchsia and the fact is that you can't escape the product point of view because the products exist, already have an OS stack, and get pretty defensive when another team tells them they're going to replace their OS, or their core if the product team is Android or Chrome OS.
How would that be better than just grabbing a BSD and starting with that? PlayStation and Apple did it and actually ended up with functional products.
Hell, if you're Meta you could just buy QNX from RIM.
1 reply →
Apple surely did not, as NeXTSTEP wasn't invented at Apple.
> But I have wondered why one of these companies with billions of dollars to burn hasn't tried to create something new as a strategic initiative.
They have; Taligent comes to mind. You may not have heard of that -- or more likely, you have but just forgot about it -- but it's a good object lesson (no pun intended) in why a successful new OS is hard to just conjure into existence. There has to be a crying, desperate need for it, not just a vague sense that This Time We'll Get It Right.
You could probably cite OS/2 Warp as a less-obscure example of the same phenomenon.
Western companies haven't thought about long term value in decades
While I appreciate Carmack and all, I'd love to hear from someone like Dave Cutler, who's been over that bridge at least a couple of times successfully, about whether, and what, he'd do if he had the resources to create whatever the hell he wants.
Another example of a new OS developed by a vendor is DryOS by Canon [0], a replacement for Wind River's VxWorks [1]. It has been extensively explored by the CHDK community, which builds custom software extensions for Canon cameras [2]. It appears to have some compatibility with Linux in some form.
In my non-expert mind, an OS for "foveated rendering" would be similar to what many cameras prioritize and would more likely resemble a "realtime OS" of some sort. OTOH, Apple's goggles use the XNU kernel [3], so maybe a microkernel would be sufficiently realtime, similar to QNX, which is often used for automotive applications [4].
0. https://web.archive.org/web/20190214134247/http://www.canon....
1. https://www.windriver.com/
2. https://chdk.fandom.com/wiki/For_Developers
3. https://github.com/apple-oss-distributions/xnu
4. https://en.wikipedia.org/wiki/QNX
Microkernels are normally pretty slow. It takes a lot of extra effort to make them fast.
Realistically there's no reason Linux wouldn't be fine on its own for AR and in fact I'm typing this on Linux on some AR glasses right now.
But what should you be running on an XR headset? The OS has to be real time. Linux can sort of do that. Probably a stripped down Linux. About 90% of Linux is irrelevant or undesirable in this application.
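For a sense of what "Linux can sort of do that" means in practice, here's a minimal sketch (Linux-only, CPython assumed; the priority value 50 is an arbitrary choice, not anything from the thread) of querying and requesting the kernel's built-in real-time scheduling class:

```python
import os

# Stock Linux already exposes real-time scheduling classes; a stripped-down
# XR build would be less about new APIs than about guaranteeing these are
# honored end to end. SCHED_FIFO priorities run 1..99 on Linux.
print(os.sched_get_priority_min(os.SCHED_FIFO),
      os.sched_get_priority_max(os.SCHED_FIFO))

# Try to put the current process under SCHED_FIFO, as a hypothetical
# compositor might for its render loop. This needs CAP_SYS_NICE (or an
# rtprio rlimit), so fall back gracefully when unprivileged.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
    print("running under SCHED_FIFO")
except PermissionError:
    print("no real-time privilege; current policy:", os.sched_getscheduler(0))
```

The hard part, of course, is not requesting the policy but making sure nothing else in the stack (drivers, page faults, the GPU queue) blows the frame budget anyway.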
Unless you’re designing the silicon yourself, stripping user space from Linux is several orders of magnitude easier than writing new device drivers for your brand new OS.
When you're at a certain scale it makes sense.
That scale is when creating an OS gives you a clear advantage over licensing or working with an open source OS.
Every other scale below that it's for knowledge, growth, research, or fun.
> To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers. Which was sort of Plan 9…
Roll call!
I wonder whether an unarticulated desire of Meta's was to avoid any license issues it could incur by using Linux or any other existing OS.
Where can I find Jonathan Blow's "Why can't we even conceive of writing a new OS today" post? No luck when searching for it.
https://x.com/Jonathan_Blow/status/1954574841547464739
I'm surprised at the comments here. Linux's days as the sole hegemon are numbered.
How about ReactOS?
Brace! Here come the “I said don’t build it, but they did anyway” comments. They should have done what you said, you were right, don’t worry
And yet, Sony did it, Nintendo did it, and both have been pretty successful.
We also need to be clear about what an OS is. Is it "Darwin" or "macOS"? They have different scopes.
Things I'd want from an OS for an XR device.
1. Fast boot. I don't want to have to wait 2-3-4-5 minutes to reboot for those times I need to reboot.
I feel like Nintendo figured this out? It updates the OS in the background somehow and reboot is nearly instant.
2. Zero jank. I'm on XR; if the OS janks in any way, people will get sick AND perceive the product as sucking. At least I do. iOS is smooth, Android is jank AF.
Do any of the existing OSes provide this? Sure, maybe take an existing OS and modify it, assuming you can.
You mean for the PlayStation? That is a FreeBSD fork, probably chosen over Linux because of the license.
Nintendo is an interesting example though. According to Wikipedia they actually use a proprietary microkernel which, if I'm reading this right, they developed themselves. It looks like the only open-source components they have are some networking code published under the BSD license.
https://en.m.wikipedia.org/wiki/Nintendo_Switch_system_softw...
1 reply →
Sony and Nintendo both forked off of either NetBSD or FreeBSD. Sony's cameras, at least up until the A7S2, run Linux (there are jailbreaks for these), although I never found any kernel or bootloader dump of the kind that would be required.
Android suffers from being Java at the core, with all the baggage that brings with it.
Sony forked FreeBSD, but Nintendo didn't. They have BSD license headers because of some BSD socket code they include.
Sony cameras all run Linux. Models with the "PlayMemories Camera Apps" feature, like the A7M2, run an Android userland on top. It's probably easier to count the cameras that don't (like old Olympus).
PS4/5 run BSD-based OSes. Only Nintendo and Microsoft run their own corporate proprietary OSes.
ChromeOS, which is based on Linux, has both fast boot and zero jank.
I like that the top reply to Carmack's wall of text is a screenshot of TempleOS with a doodle of an elephant lmao. And ironically, that meme reply is on topic and it says a thousand words with just one photo.
Another point I would add in support of that meme comment, is Google's recent rug-pull of Android not allowing sideloading apps from unsigned developers anymore starting this autumn, after over a decade of conquering the market with their "go with us, we're the open alternative to iOS" marketing.
The conclusion is to just never EVER trust big-tech/VC/PE companies, even when they do nice things, since they're 100% just playing the long game, getting buddy-buddy with you waiting till they smothered the competition with their warchest, and then the inevitable rug-pull comes once you're tied to their ecosystem and you have nowhere else to go.
Avoid these scumbags, go FOSS from the start, go TempleOS. /s but not really
I'm not sure Carmack's point disagrees with you. Meta is still big tech, and if your goal is to monetize at scale, rolling out your own isn't the most efficient way to do it. I don't think he'd discourage you rolling out your own OS if it's your hobby FOSS project.
In other words, unless God has specifically called upon you to build an OS, and maybe provided divine inspiration and assistance, you should avoid doing that. Seems to support Carmack's point!
Unless it's for love or devotion, there's no compelling reason to create a new OS in 2025. Certainly that could change in the future, but I think his observation (if I understand him correctly) is correct.
Can you explain the TempleOS meme reply?
I don’t know enough about its history to get the joke.
Just google/youtube the history of TempleOS and its creator. It's fun, sad, and tragic at the same time.
Spoiler alert: a single person coded it in his own programming language, but the person suffered from severe mental illnesses and ended up taking their own life.
I'd love to hear John's more detailed take on TempleOS
The problem with this guy is that it’s hard to criticize him, whether at work or in this forum. For example, I am going to be downvoted for mocking the fact that this guy thinks it’s some genius move to say “No” to making an operating system, whatever making an operating system means.
You can prove him wrong by doing the thing he claims should not be done.
See? This is why the Doom man got paid the big bucks. The Doom man has a literacy distortion field.
The fact that Facebook, a company far richer than Bell Labs ever was, has, like all FAANG, a culture of expensive employees babysitting broken software products (rather than a lab-researcher vs. field-technician separation), and cannot be bothered to make the long-term investment in a new OS, is why I think the industry actually doesn't deserve the R&D tax breaks HN was bemoaning had gone away until this year.
The point of R&D is the time horizon is long, and the uncertainty is high. Making JS slop that then has to be constantly babysat is opex, not capex.
The problem when working for Meta is that if you do a good job, you've helped make the world worse... so the real heroes are the people wasting money and reducing efficiency
If you're at all competent, go work somewhere else
One of the better "service to humanity" opportunities for software engineers is to join a company like Meta or TikTok and perform awfully for as long as you can.
Yeah, I'm making the world a better place by earning 500k a year doing a bad job to slow down this company. Look at how much good I am doing. Sorry, I can't hear you over my paycheck clearing.
3 replies →
The best way to serve humanity in your professional life is to serve humanity in your professional life.
In other words, be useful. You don't have to worry about "being good" or "doing good" though many do and it's quite admirable to do so. But that's not the bar you have to clear.
The bar you should try to clear is to be useful. If what you're doing all day is helping people have shelter, or raise families, or be more healthy, or have more knowledge, or even be entertained or amused, you're being useful to people.
If what you do all day ultimately serves to make people poorer, more divided, more addicted, and more unwell, then what you're doing is not useful, it's harmful.
If what you're doing all day primarily contributes, even indirectly, to making people's lives worse, then nothing you do after that will erase it. Arguments to the contrary are just rationalization.
I think a better service to humanity is to excel at your job even if you end up at a socially corrosive org like Meta or Tiktok but donate a decent chunk of your paycheck to effective altruist charities that save lives.
9 replies →
That’s an extremely reductive view.
Whatever you think of Meta core products, they pay a ton of people to work on various open source projects, do R&D on things which are only tangentially related to social media like VR or data center tech.
There are worse ways to get a paycheck for doing what you are interested in.
> tangentially related to social media like VR
This is in no way tangential.
VR is Meta's way of trying to move social media from web to VR in a Second Life way.
And you can believe me that there will be advertisement in the "game".
9 replies →
The core product is somewhat relevant though
That you can get paid and have fun doing it, doesn't make the product better.
"It wasn't all bad. They built the Autobahn"
4 replies →
I'd take zstd any time while I have facebook and friends blocked. The world is not black and white.
zstd can't really be attributed to Facebook. Yann Collet started work on it before joining Facebook, so it was kind of imported.
I am sure it made developing and standardizing the algorithm easier, but what makes it such a good (performant) algorithm is the design of the original creator.
1 reply →
Quite pessimistic view, but hard to argue against based on available data samples.
But isn't that true for every big corp, or even every public company? Even if the founders had some other goals in addition to making money, as time passes profit becomes the only goal, and usually more profit is generated by doing bad and malicious things.
Problem is systemic.
There are lots of profit motivated big companies that cause much less collateral damage. Facebook ranges from individualised harm like showing kids makeup ads when they delete a selfie, to macro scale harm like election interference
You could take a job designing landmines and you'd have a real hard time causing as much actual harm, as there just aren't enough wars going on to reach the same scale
Nokia (mostly networking-related things nowadays) touts itself - or at least used to; I haven't kept up to date - as one of the most ethical companies around.
> But isn't that true for every big corp, or even every public company?
So I suppose not really, no.
Additionally companies working on carbon-free energy might also serve as evidence. There are some big ones around.
5 replies →
I think I can say that this wasn't the case with Sun Microsystems. I never worked there, but everything I read about that company was positive. I hate the fact that Oracle (one of the worst) bought them.
1 reply →
Depending on the founder. With Apple it can be reasoned that it only went down after you know who passed away.
Yeah, it's not reliable to count on one charismatic leader to run the whole thing, but that is what the corporate model has been doing, and it's how we ended up here.
1 reply →
But Meta’s connecting the world… by keeping them inside doomscrolling.
Providing a service that billions of people value is making the world worse? Wow
What next, go work for TV stations and sabotage them?
Go work for McDonalds and make it inefficient?
Sabotage manufacturers of combustion-cars?
Providing fentanyl to addicts is doing God's work then I imagine?
2 replies →
Facebook seems to have a strange relationship with most Americans, while the rest of the world is quite happy with it. Including both WhatsApp and Instagram.
Hmm? What value? And for whom?
I find this kind of comment revolting. If I owe something, I owe it to my family and my parents, so if Meta comes to make me an offer and I accept it, it's my business and no one else's. Strangers on the internet, instead of judging people based on the company they work for and dividing them into "good" and "bad", should get off their high horses and join these companies, if they are capable, and change them from the inside if they think they are doing bad things.
Or take almost any other job.
If you can get a well paying job at meta, you have other options.
john carmack may be old but his competency/talent is 100x bigger than yours, OP
dont shit talk my goat like that
Ah, the Europeans woke up.
Lost me and TempleOS
https://xcancel.com/ID_AA_Carmack/status/1961172409920491849
Didn't even realize this was a thread and not a single tweet until you posted this link. Guess that's the downsides of not having a Twitter acct anymore.
We'll put that link in the top text too. Thanks!
[flagged]
The platform is still usable, just block anyone who posts politics in your timeline, eventually it all becomes technical stuff.
1 reply →
That's how conservatives feel on many other online platforms with far-left views.
3 replies →
The Twitter algorithm is open source, unlike the algorithm for Facebook, Instagram, TikTok etc. I'm not aware of any evidence for bias in the algorithm.
8 replies →
Neat website, thanks for posting. Basically necessary to avoid the twitter “paywall”
Did they have people who have built an OS before?
I’ve seen this firsthand. These giant tech companies try to just jump into a massive new project thinking that because they have built such an impressive website and have so much experience at scale they should just be able to handle building a new OS.
In my case it wasn’t even a new OS it was just building around an existing platform and even that was massively problematic.
The companies that build them from scratch have had it as one of their core competencies pretty much from the start.
I’m unsurprised meta had issues like this.
> Did they have people who have built an OS before?
Yes.
They have contributors to the linux kernel. Pretty sure all the big tech companies have the right people to create a new OS that is better than Linux, the hard part is getting that new OS to be adopted.
[dead]
[dead]
[dead]
I lost a lot of respect for carmack when he joined meta.
The company is a black hole of wasted talent.
He's acting like their VR UX is top notch when it's as bad as it gets. Just yesterday I dusted off my Meta Quest 2 to play a bit, and spent around an hour trying to pair my left controller with the headset after replacing the battery.
You can't do it without going through their fucking app, which asks for every permission under the sun, including GPS positioning for some reason. After finally getting the app working and pairing it with my headset, I realized the controller was just dead and there was nothing to be done.
You can pair the controllers in the settings; you don't need an app. Their VR UX does suck, that is true, and Horizon Worlds is such a colossal failure that I'm surprised they haven't cancelled it entirely yet. But Carmack also stated the technical issues numerous times.
No you can't? By their own documentation, you have to use the app: https://www.meta.com/fr-fr/help/quest/967070027432609/
And my quick search on the internet yielded no other means to pair controllers.
1 reply →
> You can't do it without going through their fucking app, that asks for every permissions under the sun, including GPS positioning for some reason.
If it uses Bluetooth, which it might for the controller, then the permission for Bluetooth on Android is fine location: the same permission as for using GPS. That might be the same permission you need for Wi-Fi stuff too, because products and services exist to turn the Bluetooth and Wi-Fi MAC addresses seen into a fine location.
But who knows what they do with the GPS signal after they ask for it?
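For context, the permission coupling described here is real: before Android 12, Bluetooth LE scanning required the fine-location permission because scan results can be used to infer location. A hypothetical app manifest sketch showing how Android's documented permission split expresses this (the `neverForLocation` flag asserts scan results aren't used for location):

```xml
<!-- Android 12+ : dedicated Bluetooth permission; neverForLocation
     declares that scan results are not used to derive location -->
<uses-permission android:name="android.permission.BLUETOOTH_SCAN"
                 android:usesPermissionFlags="neverForLocation" />
<!-- Android 11 and below: BLE scanning requires the fine-location
     permission, which is the same one GPS uses -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"
                 android:maxSdkVersion="30" />
```

So an app that pairs controllers over BLE and still targets older Android versions can legitimately end up asking for "location", even if it never touches GPS.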
No, it doesn't use Bluetooth. Or maybe it does under the hood but the permissions they ask for are GPS and "see nearby devices". You are able to pair your device with Bluetooth disabled in the phone's quick menu.
> they also got me reported to HR by the manager of the XROS effort for supposedly making his team members feel bad
This is madness. The safe space culture has really gone too far.
I'll offer a different interpretation:
If a professional can't give critical feedback in a professional setting without being rude or belittling others, then they need to improve their communication skills.
This is not that though. This is just developers being unable to handle constructive criticism, and when they can't win the argument on merits, went for the HR option. It happens.
I've had it happen to me too, but my response was to resign on the spot (I was already not satisfied with the company).
The "toxic behaviour" I had committed? I reverted a commit on the master branch that didn't compile, and sent a Slack message to the dev who had committed it saying, "Hi! There appears to have been a mistake in your latest commit, could you please check it out and fix it? I've reverted it in the meantime since I need to deploy this other feature."
The dev responded by force pushing the code that did not compile to master and contacted HR.
I decided there was greener grass on other pastures. I was right.
12 replies →
Having worked in the valley, I've seen what critical feedback meant in many companies there, and it removes all usefulness of the info because there is a ceiling of what is socially acceptable to say; therefore, you can't know how bad or urgent things are.
Everything is ASAP. They are super excited about everything. And nothing you do is wrong, it just could be improved or they like it but don't love it.
You don't know if something is important, basically.
Just like Louis CK said, "if you used 'amazing' on chicken nuggets, what are you going to say when your first child is born?". But in reverse.
Personally, I'd rather work with someone who would tell me my work is terrible if it is.
In Germany, you can't even legally say somebody did a bad job at your company in a recommendation letter. Companies created a whole subtext to workaround that, it's crazy.
Some things are just bad. You should be able to say it is. Not by saying it could be better. Not by using euphemism. It's just something that needs to go to the trash.
In fact, I don't trust people who can't receive this information, even when it's not packaged with tact (which you should attempt, but life happens). If you can't handle people not being perfectly polite every time, I can't help but feel I won't be able to count on you when things get hard.
That must be the French in me talking.
4 replies →
This.
Being "reported to HR" doesn't mean "almost got fired". It likely meant a meeting where someone explained "hey, the way you communicated that caused some upset, let's discuss better ways to handle that situation next time." Very often in larger companies, complaints about things like "this bigwig from this other group jumped all over us" are automatically sent through HR because HR has staff whose job just is resolving conflicts between people and keeping things peaceful.
1 reply →
From what you know of Carmack, does "can't give critical feedback in a professional setting without being rude or belittling others" sound like him to you? It does not to me, though granted maybe he's different in his non public persona than what you can see in presentations and talks.
1 reply →
You've concluded this from a single, brief, throwaway line? Any madness you perceive about this situation has been fabricated by you, based on the details we have.
People have been getting mad at being made to feel bad at work for much longer than “safe space culture” has existed. If someone or some team had more power than you at an organization you for sure will get reprimanded for making them feel bad.
Reading between the lines, it sounds like he got reported for giving a lot of what might kindly be described as unsolicited advice. The guy left Meta ages ago, but he apparently still can't let this one go.
If you're in the middle of trying to write a new operating system, then it's probably not helpful to have John Carmack standing over you repeatedly telling you that you shouldn't be doing it. In this case Carmack gets the last laugh. Then again, it is easy to get the last laugh by predicting that a project will fail, given that most projects do.
> unsolicited advice
He was the CTO of Oculus. Surely it is appropriate for the CTO to give advice on any big technical decisions, if not outright have veto power.
1 reply →
When a veteran tells you something and is passionate about it, maybe it is worth listening or at least dealing with internally. At the end, he left anyway even if the project didn't fail and Meta remains wealthy but largely mediocre in terms of the products it delivers while relying heavily on startup acquisition and large spending. Pretty sure most people who work there only do so for premium rent-seeking.
None of it surprising if this is a signal of how they operate.
> If you're in the middle of trying to write a new operating system, then it's probably not helpful to have John Carmack standing over you repeatedly telling you that you shouldn't be doing it. In this case Carmack gets the last laugh. Then again, it is easy to get the last laugh by predicting that a project will fail, given that most projects do.
I mean, if you're working on a project that is likely to fail, wouldn't it be nice if someone gave you cover to stop working on it, and then you could figure out something else to do that might not fail? Can't get any impact if your OS will never ship.
3 replies →
Sometimes you have to let people fail, even though you can see it coming. It sounds like Carmack was sticking his nose in a project that wasn’t under his purview and he dug his heels in a bit too much when he should have just let it fail.
All the FAANG do dumb shit all the time and waste huge sums of money, if you work at a FAANG the best thing you can do is stay in your lane and don’t do dumb shit — eventually it will shake out.
I have been bullied around by L7s (as a L5) sticking their nose in things, and the best thing you can do is clearly articulate what you are doing and why, and that you understand their feedback. Turns out the L7 got canned — partially due to their bullying — and I got promoted for executing and being a supportive teammate, so things worked out in the end.
A meeting with HR is not madness. No one got maimed or died, or even lost work, seemingly. Some people exchanged words.
Cool off.
It got mentioned for a reason. And obviously escalating with HR is a big deal as it comes with career risks for the person you are reporting. Risking someone else's career should be a last resort but seems to be more commonly a knee-jerk reaction with HR becoming weaponised.
The drawback of this is you lose good talent and keep rent-seekers instead.
3 replies →
The only reason you want me to "cool off", is because you feel bad just interacting with somebody expressing a polite, strong opinion. Online. On the other side of the world. With text.
This is exactly the madness I'm talking about.
Case in point.
1 reply →
Something tells me that if we heard the other side of the story it might hit different. There's a lot of wiggle room in what "making his team members feel bad" could mean, and I would be surprised if constructively voiced criticism would have gotten someone written up.
With my experience of being written up for constructive criticism the reasoning was that I didn’t give constructive criticism to others and they felt singled out. I only give such criticism in private so of course they were not there to see the others. Apparently that wasn’t a sufficient explanation.
it is madness, you would be surprised how many people take things too seriously. been there, had a talk with HR because I said the solution was mediocre and we had to do something better than that.
Does SteamOS count as something Carmack would discourage as well? Yes it's a Linux-based system and yes even based on an existing distro, but it is a purpose-specific OS and it seems like it's working well for Valve and people using it to play Windows games without Windows...
End of the tweet:
> I can only really see a new general purpose OS arriving due to essentially sacrificing a highly successful product’s optimality to the goal of birthing the new OS
tbh linux has quite a bit of cruft in it these days at the syscall and interface layer.
if you're apple, it does make sense to do stuff from scratch. i think in a way, software guys wind up building their own prisons. an api is created to solve problem X given world Y, but world Y+1 has a different set of problems - problems that may no longer be adequately addressed given the api invented for X.
people talk about "rewrite everything in rust" - I say, why stop there? let's go down to the metal. make every byte, every instruction, every syscall a commodity. imagine if we could go all the way back to bare metal programming, simply by virtue of the LLM auto-coding the bootloader, scheduler, process manager, all in-situ.
the software world is full of circularities like that. we went from Mainframe -> local -> mainframe, why not baremetal -> hosted -> baremetal?
Apple doesn't do a lot of baremetal development from scratch that I'm aware of. The Lightning to HDMI dongle bootstraps an XNU kernel with an AirPlay decoder into 256MB RAM, for instance.
Doesn't Apple pretty much own their whole stack? You mentioned XNU, which they made...
Their CPU, their boards, their firmware (presumably), their OS[1]; many of the peripherals are theirs, too.
Lots of companies try to emulate Apple, but it's very hard to pull off.
[1] Yes, they use some parts from Mach and FreeBSD in their OS, but the amalgamation is theirs, and they support and change the whole thing as needed.
Their bootloader firmware was initially done in a safe C dialect, and is nowadays one of the reasons Embedded Swift came to be.
You can still do "unsafe" stuff in Rust, and people do. It's perfectly possible to write safe C and C++ these days. And you don't have to deal with a borrow checker, or with the very small pool of developers available to hire.
> It’s perfectly possible to write safe C and C++ these days.
It's also very hard to do so.
1 reply →
oh, i didn't mean to invoke rust in any technical sense - i brought up rust to give an example of the attitude rust people are known for, namely "why not rewrite everything?", which a lot of people have a knee-jerk rejection of.