Agree with the author that UEFI is bad for security. You have this huge binary UEFI blob in a pre-OS boot environment that does not run open source. And once the motherboard/laptop manufacturer loses interest (which happens as soon as the product stops driving new sales), UEFI remains unpatched and insecure.
The boot loader should be simple and relatively dumb, IMHO; then it is secure. If it has to be bigger, then it should be open source.
Management processors like Intel ME, built into the CPU with their own firmware, are another x86 insecurity.
UEFI is poorly understood by approximately everybody who doesn't work directly with it and it's frustrating to see so much misinformation out there.
UEFI does not mean that there is a huge binary blob that does not run open source. UEFI is a spec. It defines many steps that must be taken to boot in a compliant way. Large portions of the code that runs in UEFI compliant systems in the wild today are in fact based on an open source 'core' available on github. It is entirely possible to perform a UEFI boot on an entirely open firmware stack, though this tends not to be done. Large silicon vendors like to keep their silicon initialization code proprietary and secret, and they often 'require' tweaks to the open source version of the UEFI 'core' to meet their needs (read: it's seen as easier/cheaper/more-business-friendly to fork the open source core and sprinkle two or three changes throughout and keep the result closed), but there's no reason it needs to be that way.
The author is wrong - there is no 'UEFI kernel' running at any ring after boot. UEFI leaves some code and data in ordinary, OS-accessible memory which can be jumped to and run by the OS if desired to perform some UEFI-related task like setting a boot variable. This code is not protected or hidden or in a special ring and does not require any special steps to invoke. It just sits there waiting to be called and can be modified or deleted if the OS chooses to do so.
SMM is actually a special ring with its own privileges, but there is nothing baked into UEFI which requires using it or leaving code running there. UEFI is extensible and so some platform performing a UEFI boot can leverage a hardware feature like SMM to maintain some control over a platform, but that requires the firmware developers to go out of their way to do that and it can only be done on hardware equipped for it.
But you know what? UEFI is not at fault there. If a platform performed a UEFI boot without touching/configuring SMM at all, then the OS or the bootloader could do the same thing. The hardware capability exists and is accessible by ring 0 until somebody flips a switch to remove that accessibility.
Proprietary, insecure software is a problem. Making firmware fall into that hole is really bad. But UEFI doesn't make that happen. UEFI is just a way of booting that doesn't specifically disallow it, because it's designed to be flexible and extensible and powerful so that a lot of needs can be met. It's completely possible to put together a firmware image which is UEFI compliant and goes out of its way to disable SMM (or any other hardware feature), and boot to an OS that wipes UEFI traces from memory if it feels like it.
Something like Intel's ME existing as an option for businesses who want it is fine. Injecting it into every platform and making it roughly impossible to disable is not. Either way, UEFI is not implicated.
UEFI is not the bad guy. Those who ship UEFI compliant systems which happen to suck are the bad guys. They do it with UEFI, they did it before UEFI, and they would do it without UEFI.
UEFI is still a bad guy, because it is overengineered. Precisely because of that, FastBoot mode was invented. Neither Windows nor Linux (nor anything else) requires 90% of UEFI's features.
> Those who ship UEFI compliant systems which happen to suck are the bad guys.
This. The UEFI implementation on my XPS 13 9343 can't pass a kernel command line, unfortunately. Ideally, I would have liked to boot straight into an EFISTUB kernel. Thankfully, there is rEFInd, which I boot as a secondary bootloader.
Well, simple and dumb is generally good for all engineered things, as long as they aren't so dumb that they can't do their job.
This principle applies especially well to boot loaders, because the job of a bootloader is to hand control off to some other, more sophisticated piece of software.
Unpatched UEFI is more about the reputation of the manufacturer/provider of the UEFI implementation. If a major manufacturer releases a motherboard, you can be sure that (1) it is patched often and (2) they use similar components across many motherboards, so bugs and vulnerabilities are patched across many boards simultaneously and are worked out sooner.
Patches existing for the firmware vulnerabilities of major manufacturers is good (I'll take your word for it, having not looked recently, but I know that a few years back this was not the case and known vulnerabilities could be found easily on shipping products).
The pathway from the patch existing to the patch being applied is overgrown with flammable brush. Infrequently traveled. Not healthy. There are efforts to fix this, but they don't have too much momentum at the moment.
1a. Often: that much is true, initially. 1b. For the lifetime of the product? Nope nope nope nope, not in my wildest dreams. What's wrong with a stable, well-built, functioning motherboard? Nothing, except that some years have passed and the manufacturer no longer has an incentive to support it.
> Secure firmware is the foundation of secure systems. If we want to build slightly more secure systems they will require open, auditable and measured firmware. If we can’t read and audit the firmware code, we can’t reason about what is going on during the critical phases of the boot process; if we can’t modify and reproducibly build the firmware, we can’t fix vulnerabilities or tailor it to our needs; and if the firmware isn’t measured and attested, we can’t be certain that our system hasn’t been tampered with.
I agree- this is a very PC focused article but firmware is everywhere. It needs to be updateable and making it open makes it far easier to update.
As far as UEFI goes, I'd just like to point out Microsoft's open source firmware efforts (Project Mu). https://microsoft.github.io/mu/ The goal is to make firmware easier to service and easier to update with security fixes for older projects.
While it's not perfect, it is a great step forward. I think we need to see more of this in the future from other companies.
(Disclaimer: I work for Microsoft and contribute to Project MU).
Reading up on Project MU now, and I'm sorry, what is this about firmware as a service? This seems like exactly what I don't want. XaaS (X as a Service) is great when there's something external that is only temporarily or optionally required, (or too expensive). Otherwise it's an ongoing dependency. But with firmware, once I own the hardware, I should own the firmware too.
Without knowing more about this specific Firmware as a Service I can only imagine how this will actually look. Maybe it just means that updates are automatic? Even that alone is an interesting debate.
Otherwise, Project Mu looks like a modern wrapper around UEFI. What's being done to address the fundamental issues? Firmware code running after boot, modifications to firmware possible by changing code between boots, etc.
The only firmware code running after boot that UEFI mandates is not below ring 0 and is fully optional - called only if and when the OS asks for it. The UEFI runtime services table is not a kernel and is parked in ordinary memory waiting to be jumped to/called.
SMM is supported but not mandated, which is exactly how any hardware feature should be treated. Blame those enabling the SMM code you don't like. Or blame the hardware manufacturer for putting the feature in at all.
UEFI is not your enemy. Its only sin is being overly complicated, which is (somewhat) debatable given the complexity of systems and OSes needing to be bootable.
Yeah, I agree: firmware as a service doesn't exactly capture what Project Mu is trying to do. But the point is, firmware should be as easy to update as any service, rather than some huge monolithic codebase that was forked from the master (TianoCore) and then hammered on until the platform booted.
In a perfect world a product would ship with perfect, bug-free, secure firmware, and it would never need updates. And ideally, manufacturers would allow users to more easily install their own UEFI/firmware onto their devices, though that brings in some added security challenges.
Since developers make mistakes, updates are currently the best solution we have. Making those updates more affordable to service, making the changes transparent to the end-users via OSS, and making it easier to apply those updates are all things Project Mu is trying to accomplish.
IIRC Microsoft was instrumental with a few other companies in developing the rather arcane and overbearing ACPI standard in the 90's that continues to make it difficult for non-Windows operating systems to reliably work with a laptop's hardware even today.
I've played with U-Boot on ARM platforms. It's a breath of fresh air. It loads the OS and then just gets out of the way. This is what simplified PC firmware should be.
While there's a lot of MU I could take or leave and some I don't really see the value of, I greatly enjoy that there's finally at least one effort out there to wrangle the build system. Having not used MU's build yet, I can't say if you've succeeded, but I applaud the effort since current "standard" build processes and scripts and wrappers in use are.... not good.
I wish the smartphone side of things got more attention and someone engineered their way to an open source baseband firmware à la OpenWrt [0]. Not sure why, after the breakthrough for GSM/2G with OsmocomBB [1], no viable libre alternative for LTE/4G has emerged [2].
The smartphone is an always-on, always-connected computer with 2B or more (30% of world's population) unsuspecting (to almost a point of being gullible) BSD/Linux users who are exposed and don't even know it, to an unprecedented degree, to adversaries with deep pockets (ad-networks [3], nation-states [4], carriers [5]) who don't need a second invitation. Most of the privacy and security battles, I feel, will be won and lost with smartphones. That's discounting IoT security altogether which is a scary proposition in itself for rather silly reasons [6][7].
This is a very PC centric article, but the same can be said about any connected device--from cars, to baby monitors, to buttplugs. If it has an internet connection, the firmware should either be open source or in open source escrow, so that if the company dies or decides to not support their device anymore, the hardware itself can continue to live.
The patch needs to be provided by phone and tablet manufacturers. Except that many otherwise capable phones are not supported anymore and will not be fixed.
Were the firmware of these devices open source, the community could fix this (given that the firmware does not have to be signed, or a signing key can be added). But no, many devices will remain forever vulnerable.
Including my phone: 4 GB RAM, 32 GB internal storage, excellent battery and screen, great computing capabilities, in excellent physical shape. It will probably last a few years more. Last updated in November 2017 by its manufacturer. Some parts will never be updated again, and there is no way to audit this stuff.
This is a shame.
Edit: and I'm lucky my phone resembles an Android One phone, so some stuff can be taken from this phone to update mine.
Perhaps software in general shouldn't be provided _as is_ anymore. The idea that someone provides software and it's your problem if it doesn't work is really... _too easy_.
Company A sells you a cell phone. Within a reasonable time (5 years? 10?) a flaw is found. Can you, the customer, fix it? No, because it depends on proprietary code, a key, some DRM, whatever.
So company A should fix it or be held accountable for the problem: being sued, paying for it. Or open the hardware so that users can fix it.
Source escrow is an interesting concept. My initial thought is that it would create huge perverse incentives. A company might continue to release small, nonsense updates to products they otherwise don't care about, just to avoid giving the source away. Meanwhile, as the owner of a device, I would be eagerly awaiting the death of the company, so I can get my hands on that juicy source code. I certainly wouldn't recommend their products! Anything to make them die more quickly.
It gets even more interesting when the "secret sauce" is in software. Say I market "SmartButton 1.0", which is nothing more than an ESP8266 connected to a button, but with some cunning algorithms and proprietary protocols that make it useful. Under an escrow system, I'll have a competitor on my hands the second I stop supporting SmartButton 1.0. Even if I'm already on SmartButton v19.
The SmartButton scenario is exactly the incentive we want, isn't it?
If you have made genuine improvements in versions 2-19, releasing version 1 shouldn't hurt too much. If, on the other hand, your versions 1 and 19 are still substantially similar, you shouldn't stop supporting version 1 to save a small cost and completely destroy the value of the product for your customers.
One of the biggest problems is the lower-level chip vendors, who often require NDAs and won't allow their code to be shared publicly. The device maker has to comply with this or find another chip, which may not be available in sufficient quantities or at a realistic price point. The chip vendors don't necessarily go out of business, even if the device maker does.
Considering the global impact on security, this is an area that would make sense for regulation. At some point, the chip vendors should have to release their code to maintainers. I'd even be fine with limiting this to after the chip goes EOL! Perhaps it could come with guarantees reducing patent infringement risks, which may be where much of the vendor reluctance comes from.
Although mentioned in the article, I would like to emphasize that https://puri.sm are selling laptops with disabled and cleaned (with me_cleaner) bios. I hope more companies follow.
Purism seems to be rather inept; they shat the bed with their recent Librem One product launch, whereby they rebranded Tusky and disabled all moderation tools on their Mastodon instance, then were surprised when their employees started quitting due to this ridiculous behaviour.
Purism doesn't seem to want to invest in the software that makes their services work, hence the commentary by Matrix devs on which services are helping to push development forward (and thus deserve subscribers).
> Chromebooks use both, coreboot on x86, and u-boot for the rest.
This isn't entirely true. Coreboot is used on a number of ARM Chromebooks, including rk3288 and rk3399 based devices. It seems like u-boot is used less and less in the space. Libreboot has builds for a few devices that kill the annoying "untrusted os" message, and even allow you to set your own trust root.
I'm not sure -- I haven't messed with a Pixelbook. On Chromebooks, the trust root lives in the bootloader's flash with no security beyond the write protect screw.
A quick googling led to a reddit post that indicates it's possible with the Pixelbook, but likely a PITA:
I am also perplexed why the Raptor Talos II does not receive more attention in the open hardware/software community. For me it is basically everything I've ever wanted from a libre system.
I have a completely honest question here that I'm hoping some people can answer. Is open source really more secure? My default answer would be yes absolutely but when I think about it I'm not sure I understand why.
If something is open source then bugs and security problems can be found more easily and then fixed. This sounds great to me and I'm sure that works out just fine most of the time. This makes me wonder though...are there really fewer intrusions into production systems that are built entirely on open source software than there are in ones built with lots of proprietary, closed source software? What does the data look like about this stuff?
I can't speak to the data analysis part, though I do believe some people have looked into it, and hopefully they can add their thoughts.
From my experience, the answer is: it depends very much on the community the project has.
First, the obvious positives: you could have lots of people with lots of different kinds of experience looking at the code, finding and fixing things.
This is how I got involved in Firebug back in the day. But I also noticed that while millions of developers used it daily, the number that got all the way to the issue reporter was small, and the number that posted fixes in an issue was minimal (I got to know them by name). Only once do I remember a security issue being reported, considering that extensions had such broad and unlimited access back then.
So, if it does not invite that kind of community, then it is possible to be a net negative with only blackhats having a reason to inspect the code. OR, you have a social problem within the community (also common), where people assume that with such a large community, surely someone looked at X. Everyone thinks that, so no one looks at X. Years later someone does and finds some surprising things in code that withstood the test of time.
That said, I think the case of UEFI would be different. It might be a good candidate for shared source at least, if it isn't already.
>If the source is freely available, then every day someone is going to read it and maybe see/fix the bug.
How many years did Heartbleed go unnoticed? How many exploits in open source software get reported here?
It's not true that someone reads all of the open source code every day. The truth is, few people ever read any of it, and fewer still have the domain expertise necessary to be able to spot and patch any obvious bug, much less subtle ones. And yet this metaphysical belief in the "many eyes" persists.
Sure, it exists, but there are supposed to be eyes on the proprietary code as well, and the effect is probably smaller than people think, with no one outside of a project's maintainers ever actually studying the code for most open source projects.
Open source software is just software. That is to say it is just as secure, or insecure, as any other available software.
The open source model, however, allows for incremental improvements, patching, security updates and auditing from the community that the typical closed source model neglects to provide.
I think the trend now is to believe that closed software that is actively maintained by a well resourced party is more secure than open software that is barely maintained by whoever contributes.
Binary blobs for hardware that has long shipped doesn't really fall into the "actively maintained" category. At least not reliably.
It's not more secure: it's just potentially easier to review for security. The lifecycle of software and quality of review determine the trustworthiness of software. I wrote about it in detail a while back. Roryokane was nice enough to host a cleaned up version here:
It seems like it depends on your threat model. If what your company is doing is valuable enough and you have a large enough organization, a motivated attacker will have access to the system’s source to run their offline analysis of it, regardless.
Background checks and interviews aren’t much of a barrier…
The issue is that open source can generally be patched by a sufficiently motivated individual when the security hole is found. If you have a proprietary firmware blob, that isn't going to happen unless there is monetary incentive for the manufacturer to do so.
Let's not forget that each security fix made to Open Source software is also a recipe on how to pwn people who didn't update to that fix yet. A project changelog is in part a list of holes that can be exploited.
Note that the recently-popular "Docker Desktop" product (I believe it uses the xhyve framework in OSX) that brings up a VM to run a linux-based Docker daemon on a local OSX system is not f/oss. The source isn't even publicly available.
It surprised me when I found out, considering that all of the other tools shipped by Docker have been.
Great overview of much of the firmware and general "OS" stack (for lack of a better word).
I'm surprised not to read a mention of the micro-code which all instructions our programs ask the processor to run actually convert to. I suppose that's starting to get too close to a discussion of open hardware, which this post mostly sidesteps. Both are important issues.
Great blog post Jess! I think this is an extension of Kerckhoff's Principle that a secure cryptosystem should be able to keep your data secure even if everything (except the key) is compromised: https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
Is there a Dockerfile or bash script anywhere that demonstrates how to install all these tools on bare metal? I operate at a higher level in the tech stack and I'm unfamiliar with these tools and how they work. A Dockerfile would be nice because then you could create a virtualish environment where you could play with the new stuff in docker exec before blowing away the old stuff.
Not a dockerfile, but it may be worth looking at buildroot [0] and qemu [1]. I'd like to say that I started 5 years ago with these tools and ended up working on embedded systems, but it's more like I started 5 years ago and ended up with drawers full of unsupported ARM boards.
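Since the grandparent explicitly asked for one, here is a hedged Dockerfile sketch for that buildroot + qemu playground. The Debian package list and the GitLab mirror URL are my assumptions, not anything from this thread; adjust them for your setup.

```dockerfile
# Hypothetical sketch: a disposable playground for buildroot + qemu.
# Package names are assumptions for Debian bookworm; adjust as needed.
FROM debian:bookworm
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential git wget cpio unzip rsync bc file \
    libncurses-dev ca-certificates qemu-system-x86
RUN git clone --depth 1 https://gitlab.com/buildroot.org/buildroot.git /buildroot
WORKDIR /buildroot
# Then, inside the container:
#   make qemu_x86_64_defconfig   # pick a qemu-friendly sample config
#   make                         # build a kernel + minimal rootfs (takes a while)
#   ls output/images/            # artifacts to feed to qemu-system-x86_64
```

This keeps the experiments in `docker exec` territory, as the grandparent wanted, so nothing on the host gets blown away while you poke at the firmware/boot stack.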
Not according to AMD. It's more an organizational issue, since to make it open, they'll need to maintain two versions. One without DRM garbage (HDCP), and one with it. And it costs more to do it naturally. As usual, DRM poison ruins technology.
It's a nice system, and definitely cheaper, but it isn't a reasonable price comparison as it isn't even remotely in the same performance ballpark as POWER9.
Depends on the workload and the # of cores in the POWER9 chip(s). The Socionext has 24 cores!
Edit: Admittedly only massively parallel workloads will be faster than the POWER9, and only if the POWER9 has limited cores. A bit of a stretch for most use cases.
“Rings 1 & 2 - Device Drivers: drivers for devices, the name pretty much describes itself.”
STOP and GEMSOS did use four rings. The evaluators griped that UNIX didn't. Microkernel proponents kept pushing mainstream OSes to move drivers from kernel mode to another mode. Maybe my memory is off again, but aren't the drivers for *nixes and most monolithic kernels in kernel mode (ring 0)? Maybe Xen does something different with its use of protected mode. Then, some things moved to user mode later on, like FUSE.
“Each of these kernels have their own networking stacks and web servers. The code can also modify itself and persist across power cycles and re-installs. We have very little visibility into what the code in these rings is actually doing”
Which is why I put money down that the backdoors NSA paid for in direct money and/or defense contracts would be in management systems. That we’d definitely find services with 0-days in there. Sure enough…
“Linux is already quite vetted and has a lot of eyes on it since it is used quite extensively.”
That’s total nonsense. Empirical evidence below that’s been consistent over long periods of time. If anything, using Linux is guaranteeing you vulnerabilities if they can call anything in the kernel. If a subset or just one function, maybe OK. Careful analysis case by case on that. We’d be better off with something clean-slate for this purpose that can reuse Linux drivers where necessary. Then, we’d check the drivers and the interfaces.
It is true that it’s better than the closed-source stuff they’re using, has better tooling, folks understand it better, and so on. All true.
“We need open source firmware for the network interface controller (NIC), solid state drives (SSD), and base management controller (BMC).”
The problem with this and Intel/AMD internals are that they’re secretive partly to avoid patent suits and new competition. You’re not getting this stuff opened. Not easily at least. Might be better to literally do a closed-source product for them vetted by multiple parties. Otherwise, get the actual specs under NDA to build the open-source code against in a way that doesn’t leak the specs a lot. Alternatively, gotta build your own hardware doing this yourself with whatever the I.P. vendors give you. I mean, good luck on the reverse engineering efforts but these are usually lagging behind.
“We need to have all open source firmware to have all the visibility into the stack but also to actually verify the state of software on a machine.”
You actually need open, secure hardware for that since attackers are now hitting hardware. I kept telling people this would happen. Just wait till they do analog and RF more. What she’s actually saying here is “verify the state of the machine if the hardware works and is honest and doesn’t do anything malicious between verifications.”
“…is the same code running on hardware for all the various places we have firmware. We could then verify that a machine was in a correct state without a doubt of it being vulnerable or with a backdoor.”
Case in point: I put a secret coprocessor on the machine for “diagnostic purposes,” it can read state of system, it can leak over RF or network, and we leak stuff out of that signed, crypto code. Good thing no major vendors are including hidden or undocumented coprocessors on their chips. ;)
“Chromebooks are a great example of this, as well as Purism computers. You can ask your providers what they are doing for open source firmware or ensuring hardware security with roots of trust.”
End with some good advice: buy stuff that’s more open and secure to get more of it. Market demand incentivizing suppliers. That could solve a lot of these problems if enough people do it.
Rings 1&2 are basically useless on x86_64 because they give you the same access to memory as the kernel, they just don't let you execute privileged instructions directly.
On 32-bit x86, ring 1 at least got used for hypervisors (VMware, VirtualBox, and Xen off the top of my head). I half remember that OS/2 used the middle rings too.
I think the protection model of four rings was just copied from VAX, being the closest thing to big iron that x86 protected mode was inspired from.
re 1&2. Ok, that's what I was thinking. Thanks for the refresher.
re protection model: Nah, it was MULTICS, from a Saltzer and Schroeder paper. They're among the pioneers of INFOSEC in high-assurance security, which I'm often talking about here. They describe their reasoning about that here [1]. It, segments, and an IOMMU were in SCOMP, the first system certified to high security. Early promoter Roger Schell got an ex-Burroughs guy that Intel hired to add the rings and segments to their chips so high-assurance security kernels could use them. The one he backed and got certified, GEMSOS, leveraged about every security feature on Intel CPUs. STOP used all the rings; GEMSOS had a hybrid scheme. BAE was selling STOP, with Aesec still selling GEMSOS. I threw in a link on security kernels if you want to check that out. Today's state of the art has moved on to secure hardware/software architectures using a mix of formal verification and language-level security on top of other QA activities. The competition used type enforcement [3] and capability security [4].
Actually, at least a bit of it does exist. There are two different "OpenBMC"s. The IBM/Rackspace one is used for POWER9, as in the Summit and Sierra supercomputers.
Another effort in the free space -- a different part from Talos -- is EOMA68 https://www.crowdsupply.com/eoma68 with a parallel effort for RISC-V.
It's a nice exception to the rule. IBM has enough patents to crush anyone that messes with them. So, they're not as worried. Don't forget older PPC and SPARC boxes with Open Firmware, too. I have one at the house from 2003 that can run Youtube vids.
First of all love it that someone is thinking about bootloaders. Thank you and I hope you're successful in this project.
I think the article, though, is only targeted towards desktop PCs/laptops/servers and mobile phones. I'm also not sure whether it is talking about first-stage bootloader vulnerabilities or second-stage bootloader vulnerabilities.
In the embedded world there often is no second-stage loading; there are simply bootloaders. There are many, many bootloaders, and open source is the most popular option here, for both the first and second stage.
Here's a table of hardware filtered by the bootloaders used
I think we can use the research done on open source router OSes like OpenWrt [1] to design a BIOS that works across all devices. One interesting point to note here is that in many routers the entire bootloader can be replaced easily using network booting. It takes seconds to flash the ROM (network booting is insecure in theory but secure in practice, since you need physical connectivity to boot via the network).
While many modern machines support network booting, replacing the first-stage bootloader (the BIOS) is nearly impossible.
Linux distributions use GRUB, which is nice and also open source. But again, it's a second-stage bootloader that comes into play after the BIOS (the first-stage bootloader) has executed.
I'd love to see more development in u-boot, as they have already done the hard work of supporting multiple devices [2], and amazingly they also support direct booting from an SD card (not an SD card adapter via a USB stick).
Here is the list of supported architectures:
/arc
/arm
/m68k
/microblaze
/mips
/nds32
/nios2
/openrisc
/powerpc
/riscv
/sandbox
/sh
/x86
Another key point to note: as a user, I have very little control over my bootloader (first stage). Since it is loaded from a ROM which I can't replace/rewrite, I can't use open source firmware even where it exists. While I can install a new operating system, I have not found any easy way to switch firmware. Unless a project like the Linux Foundation takes it up and brings together the stakeholders to adopt open source firmware, I think it will be really difficult to get adoption.
On the other hand, the bootloader is probably the only piece of software left that gives device manufacturers some kind of control over their hardware. What's in it for them in using a free, open source technology?
Agree with the author that UEFI is bad for security. You have this huge binary UEFI blob in a pre os boot environment that does not run open source. After the motherboard,laptop manufacturer looses interest and they loose interest as soon as the product does not sell more new products UEFI remains unpatched and insecure.
The boot loader should be simple and relatively dumb IMHO, then it is secure. If it should be bigger then it should be Open source.
Management processors like Intel ME built into the CPU, firmware another x86 insecurity.
UEFI is poorly understood by approximately everybody who doesn't work directly with it and it's frustrating to see so much misinformation out there.
UEFI does not mean that there is a huge binary blob that does not run open source. UEFI is a spec. It defines many steps that must be taken to boot in a compliant way. Large portions of the code that runs in UEFI compliant systems in the wild today are in fact based on an open source 'core' available on github. It is entirely possible to perform a UEFI boot on an entirely open firmware stack, though this tends not to be done. Large silicon vendors like to keep their silicon initialization code proprietary and secret, and they often 'require' tweaks to the open source version of the UEFI 'core' to meet their needs (read: it's seen as easier/cheaper/more-business-friendly to fork the open source core and sprinkle two or three changes throughout and keep the result closed), but there's no reason it needs to be that way.
The author is wrong - there is no 'UEFI kernel' running at any ring after boot. UEFI leaves some code and data in ordinary, OS-accessible memory which can be jumped to and run by the OS if desired to perform some UEFI-related task like setting a boot variable. This code is not protected or hidden or in a special ring and does not require any special steps to invoke. It just sits there waiting to be called and can be modified or deleted if the OS chooses to do so.
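To make this concrete: on Linux, the variables managed by those runtime services show up as ordinary files under efivarfs, each one a 4-byte little-endian attribute mask followed by the payload. A small Python sketch (the blob here is synthetic, standing in for a real BootCurrent variable; only the layout and attribute bits come from the spec):

```python
import struct

# UEFI variable attribute bits (from the UEFI spec)
EFI_VARIABLE_NON_VOLATILE       = 0x1
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x2
EFI_VARIABLE_RUNTIME_ACCESS     = 0x4

def parse_efivar(raw: bytes):
    """Split an efivarfs blob into (attribute mask, payload)."""
    attrs, = struct.unpack_from("<I", raw, 0)
    return attrs, raw[4:]

# Synthetic example standing in for something like
# /sys/firmware/efi/efivars/BootCurrent-8be4df61-...
# BootCurrent's payload is a UINT16 -- here, boot entry 1.
raw = struct.pack("<I", EFI_VARIABLE_BOOTSERVICE_ACCESS
                        | EFI_VARIABLE_RUNTIME_ACCESS) \
      + struct.pack("<H", 1)

attrs, payload = parse_efivar(raw)
print(hex(attrs))                       # 0x6
print(struct.unpack("<H", payload)[0])  # 1
```

Nothing here requires a special ring or special steps; it's just data the kernel chooses to expose.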
SMM is actually a special ring with its own privileges, but there is nothing baked into UEFI which requires using it or leaving code running there. UEFI is extensible and so some platform performing a UEFI boot can leverage a hardware feature like SMM to maintain some control over a platform, but that requires the firmware developers to go out of their way to do that and it can only be done on hardware equipped for it.
But you know what? UEFI is not at fault there. If a platform performed a UEFI boot without touching/configuring SMM at all, then the OS or the bootloader could do the same thing. The hardware capability exists and is accessible by ring 0 until somebody flips a switch to remove that accessibility.
Proprietary, insecure software is a problem. Making firmware fall into that hole is really bad. But UEFI doesn't make that happen. UEFI is just a way of booting that doesn't specifically disallow it, because it's designed to be flexible and extensible and powerful so that a lot of needs can be met. It's completely possible to put together a firmware image which is UEFI compliant and goes out of its way to disable SMM (or any other hardware feature), and boot to an OS that wipes UEFI traces from memory if it feels like it.
Something like Intel's ME existing as an option for businesses who want it is fine. Injecting it into every platform and making it roughly impossible to disable is not. Either way, UEFI is not implicated.
UEFI is not the bad guy. Those who ship UEFI compliant systems which happen to suck are the bad guys. They do it with UEFI, they did it before UEFI, and they would do it without UEFI.
UEFI is still a bad guy, because it is overengineered. And precisely because of that, FastBoot mode was invented. Neither Windows nor Linux (nor anything else) requires 90% of UEFI's features.
2 replies →
> Those who ship UEFI compliant systems which happen to suck are the bad guys.
This. The UEFI implementation on my XPS 13 9343 can't pass kernel command line, unfortunately. Ideally, I would have liked to boot straight into an EFISTUB kernel. Thankfully, there is rEFInd, which I boot as a secondary bootloader.
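For anyone with similarly broken firmware: rEFInd can supply the command line itself. A hypothetical refind.conf stanza (paths and the root= value are placeholders, not taken from this machine):

```
menuentry "Linux (EFISTUB)" {
    loader  /vmlinuz-linux
    initrd  /initramfs-linux.img
    options "root=PARTUUID=<your-root-partuuid> rw quiet"
}
```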
Hi,
Please can you post the link to the GitHub repo? I can't seem to find it.
Also, which license is it published under? How are people allowed to publish a fork without publishing the code?
2 replies →
>The boot loader should be simple and relatively dumb IMHO, then it is secure. If it should be bigger then it should be Open source.
I do not see why it would be different for any other software.
It's not, in my opinion. And if the piece of software is simple and dumb, I don't really see why it shouldn't be open source either.
Well simple and dumb generally is good for all engineered things, as long as they aren't so dumb that they can't do their job.
This principle applies especially well to boot loaders, because the job of a bootloader is to hand control off to some other, more sophisticated piece software.
Well, you can sandbox anything that is loaded on top of a free layer; your computer's boot firmware is certainly not in such a place.
Unpatched UEFI is more about the reputation of the manufacturer/provider of the UEFI implementation. If a major manufacturer releases a motherboard, you can be sure (1) it is patched often and (2) they use similar components across many motherboards, so bugs and vulnerabilities get patched across many boards simultaneously and are worked out sooner.
Patches existing for the firmware vulnerabilities of major manufacturers is good (I'll take your word for it, having not looked recently, but I know that a few years back this was not the case and known vulnerabilities could be found easily on shipping products).
The pathway from the patch existing to the patch being applied is overgrown with flammable brush. Infrequently traveled. Not healthy. There are efforts to fix this, but they don't have too much momentum at the moment.
2 replies →
1a. Often - that much is true; initially. 1b. For the lifetime of the product? Nope nope nope nope, not in my wildest dreams. What's wrong with a stable, well-built, functioning motherboard? Nothing, just that some years have passed and the manufacturer no longer has an incentive to support it.
Videos from the 2018 Open Source Firmware conference are available: https://osfc.io/archive
See also "Firmware is the new Software" from Trammell Hudson, https://www.platformsecuritysummit.com/2018/speaker/hudson/
> Secure firmware is the foundation of secure systems. If we want to build slightly more secure systems they will require open, auditable and measured firmware. If we can’t read and audit the firmware code, we can’t reason about what is going on during the critical phases of the boot process; if we can’t modify and reproducibly build the firmware, we can’t fix vulnerabilities or tailor it to our needs; and if the firmware isn’t measured and attested, we can’t be certain that our system hasn’t been tampered with.
I agree- this is a very PC focused article but firmware is everywhere. It needs to be updateable and making it open makes it far easier to update.
As far as UEFI goes, I'd just like to point out Microsoft's open source Firmware efforts (Project Mu). https://microsoft.github.io/mu/ The goal is to make firmware easier to service and easier to update with security fixes for older projects.
While it's not perfect, it is a great step forward. I think we need to see more of this in the future from other companies.
(Disclaimer: I work for Microsoft and contribute to Project MU).
Reading up on Project MU now, and I'm sorry, what is this about firmware as a service? This seems like exactly what I don't want. XaaS (X as a Service) is great when there's something external that is only temporarily or optionally required, (or too expensive). Otherwise it's an ongoing dependency. But with firmware, once I own the hardware, I should own the firmware too.
Without knowing more about this specific Firmware as a Service I can only imagine how this will actually look. Maybe it just means that updates are automatic? Even that alone is an interesting debate.
Otherwise, Project Mu looks like a modern wrapper around UEFI; what's being done to address the fundamental issues? Firmware code running after boot, modifications to firmware possible by changing code between boots, etc...
The only firmware code running after boot that UEFI mandates is not below ring 0 and is fully optional - called only if and when the OS asks for it. The UEFI runtime services table is not a kernel and is parked in ordinary memory waiting to be jumped to/called.
SMM is supported but not mandated, which is exactly how any hardware feature should be treated. Blame those enabling the SMM code you don't like. Or blame the hardware manufacturer for putting the feature in at all.
UEFI is not your enemy. Its only sin is being overly complicated, which is (somewhat) debatable given the complexity of systems and OSes needing to be bootable.
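To illustrate "parked in ordinary memory": the runtime services table begins with the standard EFI_TABLE_HEADER from the UEFI spec, which the OS can read like any other struct. A Python sketch over a synthetic header (the field values below are made up for the example; only the layout and signature come from the spec):

```python
import struct

# "RUNTSERV" packed little-endian: the signature of EFI_RUNTIME_SERVICES
RUNTIME_SERVICES_SIGNATURE = 0x56524553544E5552

def parse_table_header(raw: bytes):
    """EFI_TABLE_HEADER: Signature, Revision, HeaderSize, CRC32, Reserved."""
    sig, rev, size, crc, _ = struct.unpack_from("<QIIII", raw, 0)
    return {"signature": sig,
            "revision": (rev >> 16, rev & 0xFFFF),  # (major, minor)
            "header_size": size,
            "crc32": crc}

# Synthetic header claiming spec revision 2.70; size/crc are placeholders.
raw = struct.pack("<QIIII", RUNTIME_SERVICES_SIGNATURE,
                  (2 << 16) | 70, 136, 0, 0)
hdr = parse_table_header(raw)
print(hdr["revision"])  # (2, 70)
```

There is no trap door here: the function pointers that follow this header are called like any other function, and only if the OS chooses to.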
5 replies →
Yeah I agree- firmware as a service doesn't exactly capture what Project Mu is trying to do. But the point is, firmware should be as easy to update as any service, rather than some huge monolithic codebase that was forked from the master (TianoCore) and then hammered on until the platform booted.
In a perfect world a product would ship with perfect, bug-free, secure firmware and it would never need updates. And ideally, manufacturers would allow users to more easily install their own UEFI/firmware onto their devices, but that brings some added security challenges.
Since developers make mistakes, updates are currently the best solution we have. Making those updates more affordable to service, making the changes transparent to the end-users via OSS, and making it easier to apply those updates are all things Project Mu is trying to accomplish.
IIRC Microsoft was instrumental with a few other companies in developing the rather arcane and overbearing ACPI standard in the 90's that continues to make it difficult for non-Windows operating systems to reliably work with a laptop's hardware even today.
I've played with U-Boot on ARM platforms. It's a breath of fresh air. It loads the OS and then just gets out of the way. This is what simplified PC firmware should be.
While there's a lot of MU I could take or leave and some I don't really see the value of, I greatly enjoy that there's finally at least one effort out there to wrangle the build system. Having not used MU's build yet, I can't say if you've succeeded, but I applaud the effort since current "standard" build processes and scripts and wrappers in use are.... not good.
Disclosure, you mean. You're exactly _not_ disclaiming. :)
Why did Microsoft fork TianoCore into Project MU instead of contributing directly upstream to the TianoCore project?
I wish the smartphone side of things got more attention and someone engineered their way to an open source baseband firmware a la OpenWrt [0]. Not sure why, after the breakthrough for GSM/2G with OsmocomBB [1], no viable libre LTE/4G alternative has emerged [2].
The smartphone is an always-on, always-connected computer with 2B or more (30% of world's population) unsuspecting (to almost a point of being gullible) BSD/Linux users who are exposed and don't even know it, to an unprecedented degree, to adversaries with deep pockets (ad-networks [3], nation-states [4], carriers [5]) who don't need a second invitation. Most of the privacy and security battles, I feel, will be won and lost with smartphones. That's discounting IoT security altogether which is a scary proposition in itself for rather silly reasons [6][7].
RMS has had the right idea all along?
--
[0] https://news.ycombinator.com/item?id=11266796
This is a very PC centric article, but the same can be said about any connected device--from cars, to baby monitors, to buttplugs. If it has an internet connection, the firmware should either be open source or in open source escrow, so that if the company dies or decides to not support their device anymore, the hardware itself can continue to live.
A concrete argument for this: Qualcomm recently released a patch for a vulnerability that makes it possible to access private data stored in the TrustZone of many of its SoCs: https://www.nccgroup.trust/us/our-research/private-key-extra...
The patch needs to be provided by phone and tablet manufacturers. Except that many otherwise capable phones are not supported anymore and will not be fixed.
Were the firmware of these devices open source, the community could fix this (given that the firmware does not have to be signed, or a signing key can be added). But no, many devices will remain forever vulnerable. Including my phone: 4G RAM, 32G internal storage, excellent battery and screen, great computing capabilities, in excellent physical shape. It will probably last a few years more. Last updated in November 2017 by its manufacturer. Some parts will never be updated again and there is no way to audit this stuff. This is a shame.
Edit: and I'm lucky my phone resembles an Android One phone, so some stuff can be taken from this phone to update mine.
Perhaps software in general shouldn't be provided _as is_ anymore. The idea that someone provides a software and it's your problem if it doesn't work is really... _too easy_.
Company A sells you a cell phone. In a reasonable time (5 years? 10?) a flaw is found. Can you, the customer, fix it? No, because it depends on proprietary code, a key, some DRM, whatever.
So company A should fix it or be accountable for the problem. Being sued, paying for it. Or open the hardware so that user can fix it.
1 reply →
Source escrow is an interesting concept. My initial thought is that it would create huge perverse incentives. A company might continue to release small, nonsense updates to products they otherwise don't care about, just to avoid giving the source away. Meanwhile, as the owner of a device, I would be eagerly awaiting the death of the company, so I can get my hands on that juicy source code. I certainly wouldn't recommend their products! Anything to make them die more quickly.
It gets even more interesting when the "secret sauce" is in software. Say I market "SmartButton 1.0", which is nothing more than an ESP8266 connected to a button, but with some cunning algorithms and proprietary protocols that make it useful. Under an escrow system, I'll have a competitor on my hands the second I stop supporting SmartButton 1.0. Even if I'm already on SmartButton v19.
The SmartButton scenario is exactly the incentive we want, isn't it?
If you have made genuine improvements in versions 2-19, releasing version 1 shouldn't hurt too much. If, on the other hand, your versions 1 and 19 are still substantially similar, you shouldn't stop supporting version 1 to save a small cost and completely destroy the value of the product for your customers.
One of the biggest problems is the lower-level chip vendors, who often require NDAs and won't allow their code to be shared publicly. The device maker has to comply with this or find another chip, which may not be available in sufficient quantities or at a realistic price point. The chip vendors don't necessarily go out of business, even if the device maker does.
Considering the global impact on security, this is an area that would make sense for regulation. At some point, the chip vendors should have to release their code to maintainers. I'd even be fine with limiting this to after the chip goes EOL! Perhaps it could come with guarantees reducing patent infringement risks, which may be where much of the vendor reluctance comes from.
I think that's a good idea, but ARE there real and reliable escrow agents for this kind of stuff?
Is it better than, say, a gentleman's agreement to make the source public on github (or some other channel) upon dissolution of company?
Software escrow agents are definitely a thing (e.g. https://www.nccgroup.trust/uk/our-services/software-escrow-a... )
(Disclaimer, I work for that company, but not in the escrow area)
> from cars, to baby monitors, to buttplugs
Would highly recommend a penetration test for these.
Although mentioned in the article, I would like to emphasize that https://puri.sm are selling laptops whose firmware ships with the Intel ME disabled and cleaned (with me_cleaner). I hope more companies follow.
Purism seems to be rather inept; they shat the bed with their recent Librem One product launch, whereby they rebranded Tusky and disabled all moderation tools on their Mastodon instance, then were surprised when their employees started quitting over this ridiculous behaviour.
Purism doesn't seem to want to invest in the software that makes their services work, hence the commentary by Matrix devs on which services are helping to push development forward (and thus deserve subscribers).
Where can I read the details of this? (I have preordered a Librem 5. Employees quitting doesn't sound good.)
1 reply →
System76 also does this for some machines: https://news.ycombinator.com/item?id=15819636
The Purism devices look great, but they are so expensive, and it's really hard to justify a $900 phone that has worse specs than my old $300 one.
Awesome post, but a very minor nitpick:
> Chromebooks use both, coreboot on x86, and u-boot for the rest.
This isn't entirely true. Coreboot is used on a number of ARM Chromebooks, including rk3288 and rk3399 based devices. It seems like u-boot is used less and less in the space. Libreboot has builds for a few devices that kill the annoying "untrusted os" message, and even allow you to set your own trust root.
Is there a way to flash Pixelbooks?
Being able to reflash a current nvme Pixelbook with my own trust root and build and sign my own OS images would be super excellent.
The platform security of the Pixelbook is lovely; the only way it could be better is if I were able to control it.
I'm not sure -- I haven't messed with a Pixelbook. On Chromebooks, the trust root lives in the bootloader's flash with no security beyond the write protect screw.
A quick googling led to a reddit post that indicates it's possible with the Pixelbook, but likely a PITA:
https://www.reddit.com/r/PixelBook/comments/7kv944/does_anyo...
https://mrchromebox.tech
I'm surprised there is no mention of the Raptor Talos II, which is designed to be auditable in exactly this fashion.
I am also perplexed why the Raptor Talos II does not receive more attention in the open hardware/software community. For me it is basically everything I've ever wanted from a libre system.
Because it costs as much as my car and has been on backorder for a long time?
5 replies →
One interesting thing about the Raptor devices is this reverse-engineering and reimplementation project for the NIC firmware:
https://www.devever.net/~hl/ortega https://github.com/hlandau/ortega https://github.com/meklort/bcm5719-fw https://news.ycombinator.com/item?id=19679640
I have a completely honest question here that I'm hoping some people can answer. Is open source really more secure? My default answer would be yes absolutely but when I think about it I'm not sure I understand why.
If something is open source then bugs and security problems can be found more easily and then fixed. This sounds great to me and I'm sure that works out just fine most of the time. This makes me wonder though...are there really fewer intrusions into production systems that are built entirely on open source software than there are in ones built with lots of proprietary, closed source software? What does the data look like about this stuff?
I can't speak to the data analysis part, though I do believe some people have looked into it, and hopefully they can add their thoughts.
From my experience, the answer is: it depends very much on the community the project has.
First, the obvious positives: you could have lots of people with lots of different kinds of experience looking at the code, finding and fixing things.
This is how I got involved in Firebug back in the day. But I also noticed that while millions of developers used it daily, the number that made it all the way to the issue reporter was small, and the number that posted fixes in an issue was minimal (I got to know them by name). Only once do I remember a security issue being reported, considering that extensions had such broad and unlimited access back then.
So, if it does not invite that kind of community, then it is possible to be a net negative with only blackhats having a reason to inspect the code. OR, you have a social problem within the community (also common), where people assume that with such a large community, surely someone looked at X. Everyone thinks that, so no one looks at X. Years later someone does and finds some surprising things in code that withstood the test of time.
That said, I think the case of UEFI would be different. It might be a good candidate for shared source at least, if it isn't already.
I guess it's the principle of "many eyes make all bugs shallow".
If the source is freely available, then every day someone is going to read it and maybe see/fix the bug.
You can't know what bugs are in code for which you do not have the source, and the pool of people reading it is likely to be much smaller.
>If the source is freely available, then every day someone is going to read it and maybe see/fix the bug.
How many years did Heartbleed go unnoticed? How many exploits in open source software get reported here?
It's not true that someone reads all of the open source code every day. The truth is, few people ever read any of it, and fewer still have the domain expertise necessary to be able to spot and patch any obvious bug, much less subtle ones. And yet this metaphysical belief in the "many eyes" persists.
Sure, it exists, but there are supposed to be eyes on the proprietary code as well, and the effect is probably smaller than people think, with no one outside of a project's maintainers ever actually studying the code for most open source projects.
1 reply →
Open source software is just software. That is to say it is just as secure, or insecure, as any other available software.
The open source model, however, allows for incremental improvements, patching, security updates and auditing from the community that the typical closed source model neglects to provide.
It may also be helpful to frame this as "less insecure", rather than "more secure".
> Is open source really more secure?
I think the trend now is to believe that closed software that is actively maintained by a well resourced party is more secure than open software that is barely maintained by whoever contributes.
Binary blobs for hardware that has long shipped doesn't really fall into the "actively maintained" category. At least not reliably.
It's not more secure: it's just potentially easier to review for security. The lifecycle of software and quality of review determine the trustworthiness of software. I wrote about it in detail a while back. Roryokane was nice enough to host a cleaned up version here:
https://events.linuxfoundation.org/wp-content/uploads/2017/1...
It seems like it depends on your threat model. If what your company is doing is valuable enough and you have a large enough organization, a motivated attacker will have access to the system’s source to run their offline analysis of it, regardless.
Background checks and interviews aren’t much of a barrier…
> Is open source really more secure?
Probably not, but ...
The issue is that open source can generally be patched by a sufficiently motivated individual when the security hole is found. If you have a proprietary firmware blob, that isn't going to happen unless there is monetary incentive for the manufacturer to do so.
As long as you remember to update.
Let's not forget that each security fix made to Open Source software is also a recipe on how to pwn people who didn't update to that fix yet. A project changelog is in part a list of holes that can be exploited.
It’s necessary but not sufficient.
Slightly offtopic, but relevant:
Note that the recently-popular "Docker Desktop" product (I believe it uses the xhyve framework in OSX) that brings up a VM to run a linux-based Docker daemon on a local OSX system is not f/oss. The source isn't even publicly available.
It surprised me when I found out, considering that all of the other tools shipped by Docker have been.
Great overview of much of the firmware and general "OS" stack (for lack of a better word).
I'm surprised not to read a mention of the microcode that the instructions our programs ask the processor to run are actually converted into. I suppose that's starting to get too close to a discussion of open hardware, which this post mostly sidesteps. Both are important issues.
Great blog post Jess! I think this is an extension of Kerckhoffs's principle, that a secure cryptosystem should keep your data secure even if everything (except the key) is compromised: https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
Is there a Dockerfile or bash script anywhere that demonstrates how to install all these tools on bare metal? I operate at a higher level in the tech stack and I'm unfamiliar with these tools and how they work. A Dockerfile would be nice because then you could create a virtualish environment where you could play with the new stuff in docker exec before blowing away the old stuff.
Hey, also look into post compromise security eg see https://eprint.iacr.org/2016/221.pdf
This stuff is way off Dockerfile level, sorry.
Not a dockerfile, but it may be worth looking at buildroot [0] and qemu [1]. I'd like to say that I started 5 years ago with these tools and ended up working on embedded systems, but it's more like I started 5 years ago and ended up with drawers full of unsupported ARM boards.
0: https://buildroot.org
1: https://www.qemu.org/
I would start with qemu and uefi coreboot. Docker is too high level for this.
I wish AMD would provide open source AGESA, GPU firmware and CPU microcode.
Open source GPU firmware, I'm willing to bet, would run into license issues if they tried to release it.
Not according to AMD. It's more an organizational issue, since to make it open they'd need to maintain two versions: one without the DRM garbage (HDCP), and one with it. And naturally that costs more to do. As usual, DRM poison ruins technology.
Curious what the author thinks about open sourcing the secure enclave co-processor in iPhone/iPad/etc.
The Socionext E-series Developerbox has open source firmware and is at least cheaper than the Raptor stuff.
It's a nice system, and definitely cheaper, but it isn't a reasonable price comparison as it isn't even remotely in the same performance ballpark as POWER9.
https://www.phoronix.com/scan.php?page=article&item=arm-24co...
Depends on the workload and the # of cores in the POWER9 chip(s). The Socionext has 24 cores!
Edit: Admittedly only massively parallel workloads will be faster than the POWER9, and only if the POWER9 has limited cores. A bit of a stretch for most use cases.
I’m curious about NERF.
What hardware do I need to run it? Is the hardware support as limited as coreboot's?
And what do I do if I decide it's not for me? Are there known safe ways to revert to your mainboard's factory firmware?
Great overview. I have had many disagreements with IT managers who blame open source for security breaches.
Nice article. A few comments:
“Rings 1 & 2 - Device Drivers: drivers for devices, the name pretty much describes itself.”
STOP and GEMSOS did use four rings. The evaluators griped that UNIX's didn't. Microkernel proponents kept pushing mainstream OS's to move drivers from kernel mode to another mode. Maybe my memory is off again, but aren't the drivers for *nixes and most monolithic kernels in kernel mode (Ring 0)? Maybe Xen does something different with its use of protected mode. Then, some things moved to user mode later on, like FUSE.
“Each of these kernels have their own networking stacks and web servers. The code can also modify itself and persist across power cycles and re-installs. We have very little visibility into what the code in these rings is actually doing”
Which is why I put money down that the backdoors NSA paid for in direct money and/or defense contracts would be in management systems. That we’d definitely find services with 0-days in there. Sure enough…
“Linux is already quite vetted and has a lot of eyes on it since it is used quite extensively.”
That’s total nonsense. Empirical evidence below that’s been consistent over long periods of time. If anything, using Linux is guaranteeing you vulnerabilities if they can call anything in the kernel. If a subset or just one function, maybe OK. Careful analysis case by case on that. We’d be better off with something clean-slate for this purpose that can reuse Linux drivers where necessary. Then, we’d check the drivers and the interfaces.
https://events.linuxfoundation.org/wp-content/uploads/2017/1...
It is true that it’s better than the closed-source stuff they’re using, has better tooling, folks understand it better, and so on. All true.
“We need open source firmware for the network interface controller (NIC), solid state drives (SSD), and base management controller (BMC).”
The problem with this and Intel/AMD internals are that they’re secretive partly to avoid patent suits and new competition. You’re not getting this stuff opened. Not easily at least. Might be better to literally do a closed-source product for them vetted by multiple parties. Otherwise, get the actual specs under NDA to build the open-source code against in a way that doesn’t leak the specs a lot. Alternatively, gotta build your own hardware doing this yourself with whatever the I.P. vendors give you. I mean, good luck on the reverse engineering efforts but these are usually lagging behind.
“We need to have all open source firmware to have all the visibility into the stack but also to actually verify the state of software on a machine.”
You actually need open, secure hardware for that since attackers are now hitting hardware. I kept telling people this would happen. Just wait till they do analog and RF more. What she’s actually saying here is “verify the state of the machine if the hardware works and is honest and doesn’t do anything malicious between verifications.”
“ is the same code running on hardware for all the various places we have firmware. We could then verify that a machine was in a correct state without a doubt of it being vulnerable or with a backdoor.”
Case in point: I put a secret coprocessor on the machine for “diagnostic purposes,” it can read state of system, it can leak over RF or network, and we leak stuff out of that signed, crypto code. Good thing no major vendors are including hidden or undocumented coprocessors on their chips. ;)
“Chromebooks are a great example of this, as well as Purism computers. You can ask your providers what they are doing for open source firmware or ensuring hardware security with roots of trust.”
End with some good advice: buy stuff that’s more open and secure to get more of it. Market demand incentivizing suppliers. That could solve a lot of these problems if enough people do it.
Rings 1&2 are basically useless on x86_64 because they give you the same access to memory as the kernel, they just don't let you execute privileged instructions directly.
On 32-bit x86, ring 1 at least got used for hypervisors (VMware, VirtualBox, and Xen off the top of my head). I half remember that OS/2 used the middle rings too.
I think the protection model of four rings was just copied from VAX, being the closest thing to big iron that x86 protected mode was inspired from.
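For anyone rusty on the details: the ring a piece of code runs in is carried in the low two bits of its segment selector. A quick sketch decoding the conventional Linux x86-64 selector values (0x33 is the usual user-mode code selector, 0x10 the kernel code selector):

```python
def decode_selector(sel: int) -> dict:
    """x86 segment selector: bits 0-1 RPL, bit 2 table indicator, rest index."""
    return {"rpl": sel & 0b11,       # requested privilege level (the "ring")
            "ti": (sel >> 2) & 0b1,  # 0 = GDT, 1 = LDT
            "index": sel >> 3}       # descriptor table index

print(decode_selector(0x33))  # user code:   {'rpl': 3, 'ti': 0, 'index': 6}
print(decode_selector(0x10))  # kernel code: {'rpl': 0, 'ti': 0, 'index': 2}
```

So "which ring" is just those two bits; what rings 1 and 2 can actually do is up to the page tables and descriptor setup, which is why they're near-useless on x86_64.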
re 1&2. Ok, that's what I was thinking. Thanks for the refresher.
re protection model. Nah, it was MULTICS, from a Saltzer and Schroeder paper. They're among the pioneers of INFOSEC in high-assurance security, which I'm often talking about here. They describe their reasoning about that here [1]. It, segments, and an IOMMU were in SCOMP, the first system certified to high security. Early promoter Roger Schell got an ex-Burroughs guy that Intel hired to add the rings and segments to their chips so high-assurance security kernels could use them. The one he backed and got certified, GEMSOS, leveraged about every security feature on Intel CPU's. STOP used all the rings. GEMSOS had a hybrid scheme. BAE was selling STOP, with Aesec still selling GEMSOS. Threw in a link on security kernels if you want to check that out [2]. Today's state of the art moved on to secure hardware/software architectures using a mix of formal verification and language-level security on top of other QA activities. The competition used type enforcement [3] and capability security [4].
[1] https://www.multicians.org/protection.html https://www.multicians.org/exec-env.html
[2] http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf
[3] https://cryptosmith.com/mls/lock/
[4] https://web.archive.org/web/20160304223007/https://www.cis.u...
2 replies →
Actually, at least a bit of it does exist. There are two different "OpenBMC"s. The IBM/Rackspace one is used for POWER9, as in the Summit and Sierra supercomputers.
Another effort in the free space -- a different part from Talos -- is EOMA68 https://www.crowdsupply.com/eoma68 with a parallel effort for RISC-V.
It's a nice exception to the rule. IBM has enough patents to crush anyone that messes with them. So, they're not as worried. Don't forget older PPC and SPARC boxes with Open Firmware, too. I have one at the house from 2003 that can run Youtube vids.
https://en.m.wikipedia.org/wiki/Open_Firmware
Gaisler had a GPL'd SPARC core to go with it, too. Oracle's T1 and T2 were open, too.
First of all, I love that someone is thinking about bootloaders. Thank you, and I hope you're successful in this project.
I think the article, though, only targets desktop PCs/laptops/servers and mobile phones. I'm also not sure whether it is talking about first-level bootloader vulnerabilities or second-level bootloader vulnerabilities.
In the embedded world there often is no second stage loading, there are simply bootloaders. There are many, many bootloaders and opensource is the most popular option here, both first and second level.
Here's a table of hardware filtered by the bootloaders used:
https://openwrt.org/toh/views/toh_admin_bootloader
- around 800 device types use uboot
- around 200 use cfe
Both of them are opensource.
I think we can use the research done on open source router OSes like OpenWrt [1] to design a BIOS that works across all devices. One interesting point to note is that in many routers the entire bootloader can be replaced easily using network booting. It takes seconds to flash the ROM (network booting is insecure in theory but secure in practice, since you need physical connectivity to boot via the network).
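To make the "insecure in theory" part concrete: the network flashing on these routers typically goes over TFTP, and a look at the packet format shows why -- the protocol has no authentication field at all. Here is a minimal sketch (the filename "openwrt-firmware.bin" is just an illustrative placeholder) of building a TFTP read request per RFC 1350:

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read request (RRQ) packet per RFC 1350.

    Opcode 1 = RRQ, followed by the NUL-terminated filename and
    transfer mode. Note there is no credential or signature field:
    anyone who can reach UDP port 69 can request any served file,
    which is why physical connectivity is the only real gate.
    """
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

packet = tftp_rrq("openwrt-firmware.bin")
# The whole request is just: opcode, filename, NUL, mode, NUL.
```

That's the entire handshake needed to start pulling a firmware image, which is exactly why it only takes seconds on a local link.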
While many modern machines support network booting, replacing the first-level bootloader (BIOS) is almost impossibly hard.
Linux distributions use GRUB, which is nice and also open source. But again, it's a second-stage bootloader that only comes into play after the BIOS (first-stage bootloader) has executed.
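The first-stage/second-stage split is baked into legacy BIOS boot at the byte level: the firmware loads the first 512-byte sector and jumps to it only if it ends with the 0x55 0xAA signature, so everything bigger (GRUB's core image, the kernel) must be chain-loaded from that tiny stub. A minimal sketch of that signature check:

```python
def is_valid_boot_sector(sector: bytes) -> bool:
    """Check the legacy BIOS boot-sector signature.

    The BIOS loads exactly one 512-byte sector and executes it only
    if the last two bytes are 0x55 0xAA. GRUB's first stage has to
    fit in the ~446 bytes before the partition table, which is why
    it does nothing but locate and load the second stage.
    """
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

# A blank sector stamped with the signature passes; one without fails.
stub = bytearray(512)
stub[510:512] = b"\x55\xaa"
assert is_valid_boot_sector(bytes(stub))
assert not is_valid_boot_sector(bytes(512))
```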
I'd love to see more development in u-boot, as they have already done the hard work of supporting multiple devices [2], and amazingly they also support booting directly from an SD card (not an SD card adapter via a USB stick).
Here is the list of architectures supported
- /arc
- /arm
- /m68k
- /microblaze
- /mips
- /nds32
- /nios2
- /openrisc
- /powerpc
- /riscv
- /sandbox
- /sh
- /x86
Another key point to note is that as a user I have very little control over my (first-level) bootloader. Since it is loaded from a ROM that I can't replace or rewrite, even if open source firmware exists I can't use it. While I can install a new operating system, I have not found any easy way to switch firmware. Unless a project like the Linux Foundation takes it up and brings together the stakeholders to use open source firmware, I think it will be really difficult to get adoption.
On the other hand, the bootloader is probably the only piece of software left that gives device manufacturers some kind of control over their hardware. What's in it for them to use a free, open source technology?
[1] https://openwrt.org/docs/techref/bootloader
[2] https://en.wikipedia.org/wiki/Das_U-Boot