Well I was wondering when the war on general computing and computer ownership would be carried into the heart of the open source ecosystems.
Sure, there are sensible things that could be done with this. But given the background of the people involved, the fact that this is yet another clear profit-first gathering makes me incredibly pessimistic.
This pessimism is made worse by reading the answers of the founders here in this thread: typical corporate talk. And most importantly: preventing the very real dangers involved is clearly not a main goal, but is instead brushed off with empty platitudes like "I've been a FOSS guy my entire adult life...." instead of describing or considering actual preventive measures. And even if the claim were true and the founders had a real love for the hacker spirit, there is obviously nothing stopping them from selling to the usual suspects and golden-parachuting out.
I really struggled not to make this just another snarky, sarcastic comment, but it is exhausting. It is exhausting to see the hatred some have for people just owning their hardware. So sorry, "don't worry, we're your friends" just isn't enough to make me come at this with a positive attitude.
The benefits are few, the potential to do a lot of harm is large. And the people involved clearly have the network and connections to make this an instrument of user-hostility.
I do sort of wonder if there’s room in my life for a small attested device. Like, I could actually see a little room for my bank to say “we don’t know what other programs are running on your device, so we can’t take full responsibility for transactions originated from your device,” and if I look at it from the bank’s point of view that doesn’t seem unreasonable.
Of course, we’ll see if anybody is actually engaging with this idea in good faith when it all gets rolled out. Because the bank has full end-to-end control over the device, authentication will be fully their responsibility and the (basically bullshit in the first place) excuse of “your identity was stolen,” will become not-a-thing.
Obviously I would not pay for such a device (and will always have a general purpose computer that runs my own software), but if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services.
I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
It would quickly get out of hand if every online service started to do the same though. But, if remote device attestation continues to be pushed and we continue to have less and less control and ownership over our devices, I definitely see a world where I now carry two phones. One running something like GrapheneOS, connected to my own self-hosted services, and a separate "approved" phone to interact with public and essential services as they require crap like play integrity, etc.
But at the end of the day, I still fail to see why this is even needed. Governments, banks, and other entities have been providing services over the web for decades at this point with little issue. Why are we catering to tech illiteracy (by restricting ownership) instead of promoting tech education and encouraging people to learn and, importantly, take responsibility for their own actions and the consequences of those actions?
"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.
> if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services
They would only do it to assert more control over you and in Netflix's case, force more ads on you.
It is why I never use any company's apps.
If they make it a requirement, I will just close my account.
This entire shit storm is 100% driven by the music, film, and TV industries, who are desperate to eke out a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].
These people are the enemy. They do not care about computing freedom. They don't care about you or me at all. They only care about increasing profits, and they're using the threat of locking people out of Netflix via HDCP and TPM in order to force remote attestation on everyone.
I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like TFA's founders actively building things that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".
Yeah, as I am reading the landing page, the direction seems clear. It sucks, because as an individual there is not much one can do, and there is no consensus that it is a bad thing ( and even if there was, how to counter it ). Honestly, there are times I feel lucky to be as dumb as I am. At least I don't have the same responsibility for my output as people who create foundational tech and code.
Poettering is a well-known Linux saboteur, along with Red Hat. Without RH pushing his trash, he is not really that big of a threat.
Just like de Icaza, another saboteur, ran off to MS. That is the telltale sign for people not convinced that either person's work in FOSS existed to cause damage.
No, this is not a snarky, sarcastic comment. Trust Amutable at your own peril.
My tinfoil hat theory is devices like HDDs will be locked and only work on "attested" systems that actively monitor the files. This will be pushed by the media industry to combat piracy. Then opened up for para-law enforcement like palantir.
Then gpu and cpu makers will hop on and lock their devices to promote paid Linux like redhat. Or offering "premium support" to unlock your gpu for Linux for a monthly fee.
They'll say "if you are a Linux enthusiast then go tinker with arm and risc on an SD card"
> [T]he war on general computing and computer ownership [...] It is exhausting to see the hatred some have for people just owning their hardware.
The integrity of a system being verified/verifiable doesn't imply that the owner of the system doesn't get to control it.
This sort of e2e attestation seems really useful for enterprise or public infrastructure. Like, it'd be great to know that the ATMs or transit systems in my city had this level of system integrity.
Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
> Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
Once it's out there and normalized, the individual engineers don't get to control how it is used. They never do.
You want PCIe-6? Cool well that only runs on Asus G-series with AI, and is locked to attested devices because the performance is so high that bad code can literally destroy it. So for safety, we only run trusted drivers and because they must be signed, you have to use Redhat Premium at a monthly cost of $129. But you get automatic updates.
System integrity also ends at the border of the system. The entire ecosystem of ATM skimmers demonstrates this-- the software and hardware are still 100% sanctioned, they're just hidden beneath a shim in the card slot and a stick-on keypad module.
I generally agree with the concept of "if you want me to use a pre-approved terminal, you supply it." I'd think this opens up a world of better possibilities. Right now, the app-centric bank/media company/whatever has to build apps that are compatible with 82 bazillion different devices, and then deal with the attestation tech support issues. Conversely, if they provide a custom terminal, it might only need to deal with a handful of devices, and they could design it to function optimally for the single use case.
> At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
The problem is that there are powerful corporate and government interests who would love nothing more than to prevent users from controlling the keys for their own computers, and they can make their dream come true simply by passing a law.
It may be the case that certain users want to ensure that their computers are only running their code. But the same technologies can also used to ensure that their computers are only running someone else's code, locking users out from their own devices.
Remote attestation only works because your CPU's secure enclave has a private key burned (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
Every time you perform an attestation, the public key (and certificate) is divulged, which makes it a unique identifier, and one that can be traced to the point of sale - and when buying a used device, a point of resale, as the new owner can be linked to the old one.
They make an effort to increase privacy by using intermediaries to convert the identifier to an ephemeral one, and use the ephemeral identifier as the attestation key.
This does not change the fact that if the party you are attesting to gets together with the intermediary they will unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.
Also note that nothing can prevent you from forging attestations if you source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.
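To make the linkage concern concrete, here's a toy Python model (not a real TPM protocol; the hash-based "keys" and all names are purely illustrative) of why a factory-fused key turns every attestation into a stable hardware identifier:

```python
import hashlib

# Toy model of the tracking problem: the "public key" a device reveals
# during attestation is stable for the device's lifetime, so any two
# verifiers that log it can link their records.

def fused_pubkey(serial: str) -> str:
    """Stand-in for the factory-provisioned endorsement public key."""
    return hashlib.sha256(f"EK:{serial}".encode()).hexdigest()

def attest(device_serial: str, nonce: str) -> dict:
    """A device answers a verifier's challenge, divulging its public key."""
    pk = fused_pubkey(device_serial)
    proof = hashlib.sha256(f"{pk}:{nonce}".encode()).hexdigest()
    return {"pubkey": pk, "proof": proof}

# Two unrelated services log attestations from the same phone.
bank_log = [attest("PHONE-123", "bank-nonce-1")["pubkey"]]
streaming_log = [attest("PHONE-123", "tv-nonce-9")["pubkey"]]

# Colluding (or subpoenaed) services can now link the accounts:
linked = set(bank_log) & set(streaming_log)
print(len(linked))  # the same stable identifier appears in both logs
```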
In my case it is because I would never have the right amount with me, in the right denominations. Google Pay always has this covered.
Also, you need to remember to take one more thing with you, and refill it occasionally. Unlike fuel, you do not know how much you will need, or when.
It can get lost or destroyed, and is not (usually) replaceable.
I am French, currently in the US. If I need 100 USD in small denominations, I will need to go to the bank, and they will hopefully do that for me. Or not. Or not without some official paper from someone.
Ah yes, and I am in the US and the Euro is not an accepted currency here. So I need to take my 100 € to a bank and hope I can get 119.39 USD. In the right denominations.
What will I do with the 34.78 USD left when I am back home? I have a chest of money from all over the world. I showed it once to my kids when they were young, told a bit about the world and then forgot about it.
Cash also weighs quite a lot. And when it does not weigh much, it gets lost or thrown away with some other papers. Except if it is neatly folded in a wallet, which I will forget.
I do not care about being traced when going to the supermarket. If I need to do untraceable stuff I will get money from the ATM. Ah crap, they will trace me there.
So the only solution is to get my salary in cash, which is forbidden in France. Or take some small amounts from time to time. Which I will forget, and I have better things to do.
Cash sucks.
Sure, if we go cashless and terrible things happen (cyberwar, solar flare, software issues) then we are screwed. But either the situation unscrews itself, or we will have much, much, much bigger issues than money -- we will need to go full survival mode, apocalypse movies-style.
Which does exactly what I said. Full zero-knowledge attestation isn't practical, as a single compromised key would give rise to a service that attests on behalf of everyone.
The solution first adopted by the TCG (TPM specification v1.1) required a trusted third party, namely a privacy certificate authority (privacy CA). Each TPM has an embedded RSA key pair called an Endorsement Key (EK) which the privacy CA is assumed to know. In order to attest, the TPM generates a second RSA key pair called an Attestation Identity Key (AIK). It sends the public AIK, signed by the EK, to the privacy CA, who checks its validity and issues a certificate for the AIK. (For this to work, either a) the privacy CA must know the TPM's public EK a priori, or b) the TPM's manufacturer must have provided an endorsement certificate.) The host/TPM is now able to authenticate itself with respect to the certificate.

This approach permits two ways of detecting rogue TPMs: first, the privacy CA should maintain a list of TPMs known to be rogue, identified by their EK, and reject requests from them; second, if a privacy CA receives too many requests from a particular TPM it may reject them and blocklist the TPM's EK. The number of permitted requests should be subject to a risk-management exercise.

This solution is problematic since the privacy CA must take part in every transaction and thus must provide high availability whilst remaining secure. Furthermore, privacy requirements may be violated if the privacy CA and verifier collude. Although the latter issue can probably be resolved using blind signatures, the first remains.
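A toy sketch of that v1.1 flow, with hashes standing in for RSA signatures (the class, function names, and "CA-key" are illustrative, not real TPM APIs); it shows both the privacy gain and the collusion risk described above:

```python
import hashlib, secrets

# Toy model of the TPM v1.1 privacy-CA flow: the verifier only ever sees
# an ephemeral AIK, but the CA's issuance log maps AIK back to EK.

def toy_sign(key: str, msg: str) -> str:
    return hashlib.sha256(f"{key}|{msg}".encode()).hexdigest()

class PrivacyCA:
    def __init__(self):
        self.known_eks = set()    # EKs provisioned by manufacturers
        self.issuance_log = {}    # AIK -> EK: the unmasking database

    def certify(self, ek_pub: str, aik_pub: str) -> str:
        assert ek_pub in self.known_eks, "unknown or rogue TPM"
        self.issuance_log[aik_pub] = ek_pub
        return toy_sign("CA-key", aik_pub)  # certificate for the AIK

# Manufacturer provisions a TPM with an EK the CA knows about.
ek = "EK-" + secrets.token_hex(8)
ca = PrivacyCA()
ca.known_eks.add(ek)

# TPM generates a fresh AIK and gets it certified.
aik = "AIK-" + secrets.token_hex(8)
cert = ca.certify(ek, aik)

# The verifier checks the CA's certificate; it never sees the EK...
assert cert == toy_sign("CA-key", aik)

# ...but if verifier and CA collude, the AIK is trivially unmasked.
print(ca.issuance_log[aik] == ek)  # True
```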
AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms.
But what's it attesting? Their tagline "Every system starts in a verified state and stays trusted over time" should be "Every system starts in a verified state of 8,000 yet-to-be-discovered vulns and stays in that vulnerable state over time". The figure is made up, but see for example https://tuxcare.com/blog/the-linux-kernel-cve-flood-continue.... So what you're attesting is that all the bugs are still present, not that the system is actually secure.
It's a privacy consideration. If you desire to juggle multiple private profiles on a single device extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Or better yet, have both permanently disabled.
Hardware fingerprinting in general is a difficult thing to protect from - and in an active probing scenario where two apps try to determine if they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer traceable serials.
Remote attestation requires collusion with an intermediary at least, DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server of which there are many and under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.
Using Widevine for this purpose breaks Google's ToS but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it which may be all but impossible as an app doing it could just have a remote code execution "vulnerability" and request Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.
For most individuals it usually doesn’t matter. It might matter if you have an adversary, e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross‑tenant linkage.
Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.
It's a gun with a serial number. The Fast and Furious scandal of the Obama years was traced and proven with this kind of thing.
I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
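As a rough illustration of that pattern, here's a hedged Python sketch of a verifier gating secret release on a boot measurement (a real deployment would check a signed TPM quote over PCRs; all names and values here are made up):

```python
import hashlib, secrets
from typing import Optional

# Toy "attest before releasing secrets": the verifier knows the expected
# measurement of the signed image and only releases the database key if
# the node's quoted measurement matches and is bound to a fresh nonce.

EXPECTED_IMAGE = hashlib.sha256(b"signed-os-image-v42").hexdigest()
DB_KEY = "s3cret-db-key"

def quote(booted_image: bytes, nonce: str) -> dict:
    """The node reports what it booted, bound to the verifier's nonce."""
    measurement = hashlib.sha256(booted_image).hexdigest()
    return {"measurement": measurement, "nonce": nonce}

def release_secret(q: dict, nonce: str) -> Optional[str]:
    if q["nonce"] != nonce:                 # replayed/stale quote
        return None
    if q["measurement"] != EXPECTED_IMAGE:  # unknown or modified image
        return None
    return DB_KEY

nonce = secrets.token_hex(16)
good = release_secret(quote(b"signed-os-image-v42", nonce), nonce)
bad = release_secret(quote(b"tampered-image", nonce), nonce)
print(good, bad)  # s3cret-db-key None
```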
This seems like the kind of technology that could make the problem described in https://www.gnu.org/philosophy/can-you-trust.en.html a lot worse. Do you have any plans for making sure it doesn't get used for that?
I'm Aleksa, one of the founding engineers. We will share more about this in the coming months but this is not the direction nor intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom, in practice being able to do this is far more preferable for enterprises with strict security controls.
I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Thanks for the clarification and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start, it's how projects evolve once they matter.
History is pretty consistent here:
WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.
Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.
Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.
Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.
Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.
Common failure modes:
Enterprise customers demand guarantees - policy creeps in.
Liability enters the picture - defaults shift to "safe for the company."
Revenue depends on trust decisions - neutrality erodes.
Core maintainers lose leverage - architecture hardens around control.
Even if keys are user-controlled today, the key question is architectural:
Can this system resist those pressures long-term, or does it merely promise to?
Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.
I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.
Can you (or someone) please tell what’s the point, for a regular GNU/Linux user, of having this thing you folks are working on?
I can understand corporate use case - the person with access to the machine is not its owner, and corporation may want to ensure their property works the way they expect it to be. Not something I care about, personally.
But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing users to surrender any rights. Paired with the awareness gap, the effects were disastrous, and not just for personal compute.
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Until you get acquired, receive a golden parachute and use it when realizing that the new direction does not align with your views anymore.
But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.
So far, that's a slick way to say not really. You are vague where it counts, and surely you have a better idea of the direction than you say.
Attestation of what, to whom, for which purpose? What freedom do users actually gain from controlling their keys, and how does that square with remote attestation and the wishes of enterprise users?
Thanks, this would be helpful. I will follow on by recommending that you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want, when planning or releasing products. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!
Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.
that’s great that you’ll let users have their own certificates and all, but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authority.
it will be railroaded through in the same way that systemd was railroaded onto us.
> The models we have in mind for attestation are very much based on users having full control of their keys.
If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.
> The models we have in mind for attestation are very much based on users having full control of their keys.
FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this there's no going back.
Please have a hard and honest think on whether you should actually build this thing. Because once you do, the genie is out and there's no going back.
This WILL be used to infringe on individual freedoms.
The only question is WHEN?
And your answer to that appears to be 'Not for the time being'.
Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.
It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.
This is extremely bad logic. The technology of enforcing trusted software has no inherent value, good or ill; it depends entirely on expected usage. Anything that is substantially open will be used according to the values of its users, not according to your values, so we ought to consider their values, not yours.
Suppose a fascist state wanted to identify potential agitators by scanning all communication for indications of dissent. It could require this technology in all trusted environments, and require such an environment to bank, connect to an ISP, or use Netflix.
One could even imagine a completely benign usage which only identified actual wrongdoing, alongside another which profiled based almost entirely on anti-regime sentiment or reasonable discontent.
The good users would argue that the only problem with the technology is its misuse, but without the underlying technology such misuse is impossible.
One can imagine two entirely different parallel universes, one in which a few great powers went the wrong way, enabled in part by trusted computing and the pervasive surveillance made possible by AI's capacity to do the massive and boring work of analyzing a glut of ordinary behaviour and communication, plus the tech and law to ensure said surveillance is carried out.
Even those not misusing the tech may find themselves worse off in such a world.
Why again should we trust this technology just because you are a good person?
You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.
Please don't bring attestation to common Linux distributions. This technology, by essence, moves trust to a third party distinct from the user. I don't see how it can be useful in any way to end users like most of us here. Its use by corporations has already caused too much damage and exclusion in the mobile landscape, and I don't want folks like us becoming pariahs in our own world, just because we want machines we bought to be ours...
A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
To anyone thinking that's not possible: we already switched inits to systemd. And upstream owners being persnickety saw MariaDB replace MySQL everywhere, LibreOffice replace OpenOffice, and so on.
All the recent pushiness by a certain zealotish Italian Debian maintainer only helps this case. Trying to degrade Debian into a clone of Red Hat is uncouth.
> A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
This misunderstands why systemd succeeded. It included several design decisions aimed at easing distribution maintainers' burdens, thus making adoption attractive to the same people that would approve this adoption.
If a systemd fork differentiates on not having attestation and getting rid of an unspecified set of "all the silly parts", how would they entice distro maintainers to adopt it? Elaborating what is meant by "silly parts" would be needed to answer that question.
Attestation is a critical feature for many H/W companies (e.g. IoT, robotics), and they struggle with finding security engineers with expertise in this area (disclaimer: I used to work as an operating system engineer + security engineer). Many distros are designed not only for desktop users but also for industrial uses. If distros ship standardized packages in this area, it would help those companies a lot.
This is the problem with Linux in general. It's far too infiltrated by our adversaries from the big tech industry.
Look at all the kernel patch submissions. 90% are not users but big tech drones. Look at the Linux foundation board. It's the who's who of big tech.
This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial; the BSDs started commercial but are hardly used as such anymore and are mostly user-driven now (yes, there are a few exceptions like Netflix, Netgate, iX, etc., but nothing on the scale of Huawei, Amazon, etc.).
I'm not too big in this field, but didn't many of those same IoT companies and the like struggle with packages becoming dependent on Poettering's work, since they often needed much smaller/minimal distros?
You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be displaced by an open source developer, it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, however distros by default should use it, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like secureboot isn't configurable at install time, with custom certs,distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not productive; it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way doesn't mean that's the only way of doing it.
Whoever uses this seeks to ensure a certain kind of behavior on a machine they typically don't own (in the legal sense of it). So of course you can make it optional. But then software that depends on it, like your banking Electron app or your Steam game, will refuse to run... so as the user, you don't really have a choice.
I would love to use that technology to do reverse attestation, and require the server that handles my personal data to behave a certain way, like obeying the privacy policy terms of the EULA and not using my data to train LLMs if I so opted out. Something tells me that's not going to happen...
My only experience with Linux secure boot so far.... I wasn't even aware that it was secure booted. And I needed to run something (I think it was the Displaylink driver) that needs to jam itself into the kernel. And the convoluted process to do it failed (it's packaged for Ubuntu but I was installing it on a slightly outdated Fedora system).
What, this part is only needed for secure boot? I'm not sec... oh. So go back to the UEFI settings, turn secure boot off, problem solved. I usually also turn off SELinux right after install.
So I'm an old greybeard who likes to have full control. Less secure. But at least I get the choice. Hopefully I continue to do so. The notion of not being able to access online banking services or other things that require account login, without running on a "fully attested" system does worry me.
Secure Boot only extends the chain of trust from your firmware down to the first UEFI binary it loads.
Currently SB is effectively useless because it will at best authenticate your kernel but the initrd and subsequent userspace (including programs that run as root) are unverified and can be replaced by malicious alternatives.
Secure Boot as it stands right now in the Linux world is effectively an annoyance that’s only there as a shortcut to get distros to boot on systems that trust Microsoft’s keys but otherwise offer no actual security.
It however doesn’t have to be this way, and I welcome efforts to make Linux just as secure as proprietary OSes who actually have full code signature verification all the way down to userspace.
here is some actual security: encrypted /boot, encrypted everything other than the boot loader (grub in this case)
sign grub with your own keys (some motherboards let you do so). don't let random things signed by microsoft boot (it defeats the whole point)
so you have grub in an efi partition, it passes secure boot, loads, and attempts to unlock a luks partition with the user-provided passphrase. if it passed secure boot, it should increase confidence that you are typing your password into the legit thing
so anyway, after unlocking luks, it locates the kernel and initrd inside it, and boots
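the self-signing step described above, sketched with sbsigntools (key names and paths here are examples, and the enrollment step varies by firmware):

```shell
# Sketch only: generate your own Secure Boot signing key, then sign the
# GRUB EFI binary with it. Assumes sbsigntools is installed; paths and
# the enrollment mechanism differ between firmwares.
openssl req -new -x509 -newkey rsa:2048 -nodes \
  -keyout db.key -out db.crt -days 3650 -subj "/CN=my secure boot key/"
openssl x509 -in db.crt -outform DER -out db.cer  # DER form for firmware enrollment

# sign the bootloader; repeat for any EFI binary you want to allow to boot
sbsign --key db.key --cert db.crt \
  --output /boot/efi/EFI/grub/grubx64.efi.signed \
  /boot/efi/EFI/grub/grubx64.efi

# then enroll db.cer from the firmware setup UI (or with efi-updatevar),
# removing Microsoft's keys if you want only your own binaries to boot
```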
the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it comes back up, it doesn't verify anything. slightly insane tech
however, this is still better than, at failure, letting anything run
sophisticated attackers will defeat this, but they can also add a variety of attacks at hardware level
There is the Integrity Measurement Architecture, but it isn't very mature in my opinion. Even Secure Boot and module signing require manual setup by users; they aren't supported by default, or by installers. You have to more or less manage your own certs and CA, although I did notice some laptops have Debian signing keys in UEFI by default? If only the Debian installer set up module signing.
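For the curious, a rough sketch of how manual IMA appraisal setup still is today (using ima-evm-utils; key paths and file names are examples):

```shell
# Sketch of manual IMA appraisal setup; key paths are examples.
# 1. Boot with an appraisal policy on the kernel command line:
#      ima_policy=appraise_tcb ima_appraise=enforce
# 2. Sign files with a key whose certificate is on the kernel's .ima keyring:
evmctl ima_sign --key /etc/keys/ima.key -a sha256 /usr/local/bin/some-binary
# 3. The signature lands in the security.ima extended attribute,
#    which the kernel verifies before allowing access/execution:
getfattr -m security.ima -d /usr/local/bin/some-binary
```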
But you miss a critical part - Secure Boot, as the name implies, is for boot, not OS runtime. Linux, I suppose, considers everything after the initrd loads to be post-boot?
I think pid-1 hash verification from the kernel is not a huge ask, as part of secure boot, and leave it to the init system to implement or not implement user-space executable/script signature enforcement. I'm sure Mr. Poettering wouldn't mind.
It is not useless. I'm using a UKI, so the initrd is built into the kernel binary and signed. I'm not using a bootloader, so UEFI checks my kernel signature directly. My userspace is encrypted and the key is stored in the TPM, so the whole boot chain is verified.
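For reference, that setup maps onto current systemd tooling roughly like this (a sketch; the device path, key names, and PCR choice are examples, not taken from the comment above):

```shell
# Sketch of a UKI + TPM-bound root setup with systemd tooling (>= 253).
# Build a signed unified kernel image (kernel + initrd + cmdline in one
# EFI binary), so UEFI verifies everything in one signature check:
ukify build \
  --linux=/boot/vmlinuz \
  --initrd=/boot/initrd.img \
  --cmdline="root=/dev/mapper/root rw" \
  --secureboot-private-key=db.key --secureboot-certificate=db.crt \
  --output=/boot/efi/EFI/Linux/linux.efi

# Bind the LUKS key to the TPM, sealed against PCR 7 (Secure Boot state):
# the rootfs only unlocks automatically if the verified chain is intact.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
```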
Isn’t the idea that the kernel will verify anything beneath it. Secure boot verifies the kernel and then it’s in the hands of the kernel to keep verifying or not.
Isn't it possible to force TPM measurements for stuff like the kernel command line or initramfs hash to match in order to decrypt the rootfs? Or make things simpler with UKIs?
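The primitive underlying that idea is the TPM's PCR extend operation. A toy sketch of it (simplified: real TPMs hash raw bytes, not hex strings, and the firmware does the measuring):

```shell
# Toy illustration of a TPM PCR "extend": new = SHA256(old || measurement).
# PCRs start zeroed; each boot component is measured into them in order.
pcr="0000000000000000000000000000000000000000000000000000000000000000"
extend() {
  digest=$(printf '%s' "$1" | sha256sum | awk '{print $1}')
  pcr=$(printf '%s%s' "$pcr" "$digest" | sha256sum | awk '{print $1}')
}
extend "firmware"
extend "kernel cmdline: root=/dev/sda2 ro"
extend "initramfs"
echo "$pcr"   # any change to any component, or to their order, changes the result
```

Sealing the disk key against a PCR value then means decryption only succeeds when the measurements (kernel command line, initramfs hash, etc.) match what was enrolled.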
Most of the firmwares I've used lately seem to allow adding custom secureboot keys.
There is some misinformation in your post. Both Windows and Linux check driver signatures. Once you boot Linux with UEFI Secure Boot, you cannot use unsigned drivers, because the kernel can detect it and activate lockdown mode. You have to sign all of your drivers within the same PKI as your UEFI key.
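A sketch of what that looks like in practice for an out-of-tree module, assuming shim's MOK machinery (key names and the module name are examples):

```shell
# Sketch: sign an out-of-tree module with your own MOK so that a
# lockdown-enabled kernel accepts it. Assumes kernel headers installed.
openssl req -new -x509 -newkey rsa:2048 -nodes \
  -keyout mok.key -out mok.crt -days 3650 -subj "/CN=module signing key/"
openssl x509 -in mok.crt -outform DER -out mok.der
mokutil --import mok.der   # enrolled on next reboot via the shim MOK manager

# sign the module with the kernel's sign-file helper (e.g. the
# DisplayLink evdi module from the comment upthread):
/usr/src/linux-headers-$(uname -r)/scripts/sign-file \
  sha256 mok.key mok.der evdi.ko
```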
Remote attestation is another technology that is not inherently restrictive of software freedom. But here are some examples of technologies that have already restricted freedom due to oligopoly combined with network effects:
* smartphone device integrity checks (SafetyNet / Play Integrity / Apple DeviceCheck)
It very clearly is restrictive of software freedom. I've never suffered from an evil maid breaking into my house to access my computer, but I've _very_ frequently suffered from corporations trying to prevent me from doing what I wish with my own things. We need to push back on this notion that this sort of thing was _ever_ for the end-user's benefit, because it's not.
It's interesting there's no remote attestation the other way around, making sure the server is not doing something to your data that you didn't approve of.
The authors clearly don’t intend this to happen but that doesn’t matter. Someone else will do it. Maybe this can be stopped with licensing as we tried to stop the SaaS loophole with GPLv3?
I am quite conflicted here. On one hand I understand the need for it (offsite colo servers is the best example). Basic level of evil maid resistance is also a nice to have on personal machines. On the other hand we have all the things you listed.
I personally don't think this product matters all that much for now. This type of tech is not oppressive by itself, only when it is demanded by an adversary. The ability of the adversary to demand it is a function of how widespread the capability is, and there aren't going to be enough Linux clients for this to start infringing on the rights of the general public just yet.
A bigger concern is all the efforts aimed at imposing integrity checks on platforms like the Web. That will eventually force users to make a choice between being denied essential services and accepting these demands.
I also think AI would substantially curtail the effect of many of these anti-user efforts. For example a bot can be programmed to automate using a secure phone and controlled from a user-controlled device, cheat in games, etc.
> On one hand I understand the need for it (offsite colo servers is the best example).
Great example of proving something to your own organization. Mullvad is probably the most trusted VPN provider and they do this! But this is not a power that should be exposed to regular applications, or we end up with a dystopian future of you are not allowed to use your own computer.
Secure Boot allows you to enroll your own keys. This is part of the spec, and there are no shipped firmwares that prevent you from going through this process.
Android lets you put your own signed keys in on certain phones. For now.
The banking apps still won't trust them, though.
To add a quote from Lennart himself:
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
Your system will not belong to you anymore. Just as it is with Android.
> This is part of the spec, and there are no shipped firmwares that prevents you from going through this process.
Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.
I wish the myth of the spec would die at this point.
Many motherboards' Secure Boot implementations violate the supposed standard and do not allow you to invalidate the pre-loaded keys you don't approve of.
It's interesting how quickly the OSS movement went from "No, no, we just want to include companies in the Free Software Movement" to "Oh, don't worry, it's ok if companies with shareholders that are not accountable to the community have a complete monopoly on OSS, and decide what direction it takes"
FOSS was imagined as a brotherhood of hackers, sharing code back and forth to build a utopian code commons that provided freedom to build anything. It stayed firmly in the realm of the imaginary because, in the real world, everybody wants somebody else to foot the bill or do the work. Corporations stepped up once they figured out how to profit off of FOSS and everyone else was content to free ride off of the output because it meant they didn't have to lift a finger. The people who actually do the work are naturally in the driver's seat.
systemd solved/improved a bunch of things for linux, but now the plan seems to be to replace package management with image based whole dist a/b swaps. and to have signed unified kernel images.
this basically will remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an id
poettering works for Microsoft these days; they want to turn linux into an appliance just like windows, no longer a general purpose os. the transition is still far from over on windows, but look at android and how the google play services dependency/choke-hold works
i'm sure i'll get many downvotes, but despite some hyperbole this is the trajectory
> the plan seems to be to replace package management with image based whole dist a/b swaps
The plan is probably to have that as an alternative for the niche uses where that is appropriate.
The majority of this thread seems to have slid down that slippery slope and jumped directly to the conclusion that the attestation mechanism will be mandatory on all Linux machines in the world and you won't be able to run anything without it. Even if that were a goal for Amutable as a company, it's unfeasible to do when there's such a breadth of distributions and non-corporate-affiliated developers out there that would need to cooperate for that to happen.
Nobody says that you will not have alternatives. What people are saying, is that if you're using those alternatives you won't be able to watch videos online, or access your bank account.
Immutable, signed systems do not intrinsically conflict with hackability. See this blog post of Lennart's[0] and systemd's ParticleOS meta-distro[1].
I do agree that these technologies can be abused. But system integrity is also a prerequisite for security; it's not like this is like Digital "Rights" Management, where it's unequivocally a bad thing that only advances evil interests. Like, Widevine should never have been made a thing in Firefox imo.
So I think what's most productive here is to build immutable, signable systems that can preserve user freedom, and then use social and political means to further guarantee those freedoms. For instance a requirement that owning a device means being able to provision your own keys. Bans on certain attestation schemes. Etc. (I empathize with anyone who would be cynical about those particular possibilities though.)
Linux is nowadays mostly sponsored by big corporations. They have different goals and different ways of doing things. For probably its first 10 years, Linux was driven by enthusiasts, and therefore it was a lean system. Something like systemd is typical corporate output. Due to its complexity it would have died long before finding adoption, but with enterprise money this is possible. Try to develop for the combo of Linux Bluetooth/audio/D-Bus: the complexity drives you crazy, because all this stuff was made for (and financed by) the corporate needs of the automotive industry. Simplicity is never a goal in these big companies.
But then Linux wouldn't be where it is without the business side paying for the developers. There is no such thing as a free lunch...
> this basically will remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an id
Yeah. I'm pretty sure it requires a very specific psychological profile to decide to work on such a user-hostile project while post-fact rationalizing that it's "for good".
All I can say is I'm not surprised that Poettering is involved in such a user-hostile attack on free computing.
P.S: I don't care about the downvotes, you shouldn't either.
Does this guy do anything that is user-friendly and is as per open source ethos of freedom and user control? In all this shit-show of Microsoft shoving AI down the throat of its users, I was happy to be firmly in the Linux camp for many many years. And along come these kind of people to shit on that parade too.
P.S: Upvoted you. I don't care about downvotes either.
It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.
I have followed systemd's efforts into Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a very similar direction to these projects:
- Hal Finney's transparent server
- Keylime
- System Transparency
- Project Oak
- Apple Private Cloud Compute
- Moxie's Confer.to
I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.
I'd love to meet up at FOSDEM. Email me at fredrik@mullvad.net.
Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)
Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all project above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE
Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us, and we want to understand how to put the right roots of trust and observability into your hands.
Edit: I've reached out privately by email for next steps, as you requested.
Hi David. Great! I actually wasn't planning on going due to other things, but this is worth re-arranging my schedule a bit. See you later this week. Please email me your contact details.
As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.
Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd and some other components, so we can focus our efforts elsewhere. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.
I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)
I'm super far from an expert on this, but it NEEDS reproducible builds, right? You need to start from a known good, trusted state - otherwise you cannot trust any new system states. You also need it for updates.
Well, it comes down to what trust assumptions you're OK with. Reproducible builds reduce the trust you need in the build environment, but you still need to ensure the authenticity of the source somehow. Verified boot, measured boot, reproducible builds, local/remote attestation, and transparency logging each provide different things. Combined, they form the possibility of a sort of authentication mechanism between a server and client. However, all of the concepts are useful by themselves.
Ah, good old remote attestation. Always works out brilliantly.
I have this fond memory of that Notary in Germany who did a remote attestation of me being with him in the same room, voting on a shareholder resolution.
While I was traveling on the other side of the planet.
This great concept that totally will not blow up the planet has been proudly brought to you by Ze Germans.
No matter what your intentions are: It WILL be abused and it WILL blow up. Stop this and do something useful.
[While systemd had been a nightmare for years, these days its actually pretty good, especially if you disable the "oh, and it can ALSO create perfect eggs benedict and make you a virgin again while booting up the system!" part of it. So, no bad feelings here. Also, I am German. Also: Insert list of history books here.]
What is the endgame here? Obviously "heightened security" in some kind of sense, but to what end and what mechanisms? What is the scope of the work? Is this work meant to secure forges and upstream development processes via more rigid identity verification, or package manager and userspace-level runtime restrictions like code signing? Will there be a push to integrate this work into distributions, organizations, or the kernel itself? Is hardware within the scope of this work, and to what degree?
The website itself is rather vague in its stated goals and mechanisms.
I suspect the endgame is confidential computing for distributed systems. If you are running high value workloads like LLMs in untrusted environments you need to verify integrity. Right now guaranteeing that the compute context hasn't been tampered with is still very hard to orchestrate.
No, the endgame is that a small handful of entities or a consortium will effectively "own" Linux because they'll be the only "trusted" systems. Welcome to locked-down "Linux".
You'll be free to run your own Linux, but don't expect it to work outside of niche uses.
Personally I find this interesting because there needs to be a way for a hardware token providing an identity to interact with a device-and-software combination that ensures there is no tampering between the user who owns the identity and the end result of the computation.
A concrete example of that is electronic ballots, which is a topic I often bump heads with the rest of HN about, where a hardware identity token (an electronic ID provided by the state) can be used to participate in official ballots, while both the citizen and the state can have some assurance that there was nothing interceding between them in a malicious way.
Entities other than me being able to control what runs on the device I physically posses is absolutely not acceptable in any way. Screw your clients, screw you shareholders and screw you.
Assuming you're using systemd, you already gave up control over your system. The road to hell was already paved. Now, you would have to go out of your way to retain control.
In the great scheme of things, this period where systemd was intentionally designed and developed and funded to hurt your autonomy but seemed temporarily innocuous will be a rounding error.
Nah man, you are FUDing. systemd might have some poor design choices and arrogant maintainers, but at least I can drop it at any time and my bank won't freak out about it. This one… it's a whole other level.
I think https://0pointer.net/blog/authenticated-boot-and-disk-encryp... is a much better explanation of the motivation behind this straight from the horse's mouth. It does a really good job of motivating the need for this in a way that explains why you as the end user would desire such features.
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
See Android; or, where you no longer own your device, and if the company decides, you no longer own your data or access to it.
I really hope this would be geared towards clients being able to verify the server state or just general server related usecases, instead of trying to replicate SafetyNet-style corporate dystopia on the desktop.
Probably obvious from the surnames but this is the first time I've seen a EU company pop up on Hacker News that could be mistaken for a Californian company. Nice to see that ambition.
I understand systemd is controversial; that can be debated endlessly. But the executive team and engineering team look very competent. It will be interesting to see where this goes.
Lennart will be involved with at least three events at FOSDEM on the coming weekend. The talks seem unrelated at first glance but maybe there will be an opportunity to learn more about his new endeavor.
Your computer will come with a signed operating system. If you modify the operating system, your computer will not boot. If you try to install a different operating system, your computer will not boot.
> If you try to install a different operating system, your computer will not boot.
That does not follow. That would only very specifically happen when all of these are true:
1. Secure Boot cannot be disabled
2. You cannot provision your own Secure Boot keys
3. Your desired operating system is not signed by the computer's trusted Secure Boot keys
"Starting in a verified state and stay[ing] trusted over time" sounds more like using measured boot. Which is basically its own thing and most certainly does not preclude booting whatever OS you choose.
Although if your comment was meant in a cynical way rather than approaching things technically, then I don't think my reply helps much.
Remote attestation requires a great deal of trust... I know this comment is likely to be down-voted, but I can't think of a Lennart Poettering project that didn't try to extend, centralize, and conglomerate Linux, with disastrous results in the short term and less innovation, flexibility, and functionality in the long term. Trading the strengths of Unix systems for the goal of making them more "Microsoft"-like.
Remote attestation requires a great deal of trust, and I simply don't have it when it comes to this leadership team.
One of the most grating pain points of the early versions of systemd was a general lack of humility, some would say rank arrogance, displayed by the project lead and his orbiters. Today systemd is in a state of "not great, not terrible" but it was (and in some circles still is) notorious for breaking peoples' linux installs, their workflows, and generally just causing a lot of headaches. The systemd project leads responded mostly with Apple-style "you're holding it wrong" sneers.
It's not immediately clear to me what exactly Amutable will be implementing, but it smells a lot like some sort of DRM, and my immediate reaction is that this is something that Big Tech wants but that users don't.
My question is this: Has Lennart's attitude changed, or can linux users expect more of the same paternalism as some new technology is pushed on us whether we like it or not?
You won't believe how many hours we have lost troubleshooting SysV init and Upstart issues. systemd is so much better in every way, reliable parallel init with dependencies, proper handling of double forking, much easier to secure services (systemd-analyze security), proper timer handling (yay, no more cron), proper temporary file/directory handling, centralized logs, etc.
It improves on about every level compared to what came before. And no, nothing is perfect and you sometimes have to troubleshoot it.
Why on earth would somebody make a company with one of the the most reviled programmers on earth? Everyone knows that everything he touches turns to shit.
I thought it was how to plug the user freedom hole. Profits are leaking because users can leave the slop ecosystem and install something that respects their freedom. It's been solved on mobile devices and it needs to be solved for desktops.
All vague hand waving at this point and not much to talk about. We'll have to wait and see what they deliver, how it works and the business model to judge how useful it will be.
What might you call a sort of Dunbar's number that counts not social links, but rather the number of things to which a person must actively refuse consent?
Immutability means you can't touch or change some parts of the system without great effort (e.g. macOS SIP).
Atomicity means you can track every change, and every change is so small that it affects only one thing and can be traced, replayed or rolled back. It's like going from A to B and being able to return to A (or go to B again) in a deterministic manner.
You. The money quote about the current state of Linux security:
> In fact, right now, your data is probably more secure if stored on current ChromeOS, Android, Windows or MacOS devices, than it is on typical Linux distributions.
Say what you want about systemd the project but they're the only ones moving foundational Linux security forward, no one else even has the ambition to try. The hardening tools they've brought to Linux are so far ahead of everything else it's not even funny.
Just an assumption here, but the project appears to be about the methodology to verify the install. Who holds the keys is an entirely different matter.
I'm sure this company is more focused on the enterprise angle, but I wonder if the buildout of support for remote attestation could eventually resolve the Linux gaming vs. anti-cheat stalemate. At least for those willing to use a "blessed" kernel provided by Valve or whoever.
A rust-vmm-based environment that verifies/authenticates an image before running it? An immutable VM (no FS, root dropped after setting up the network, no devices or only a curated set), or a 'micro'-VM based on systemd? A VMM that captures the running kernel code/memory mapping before handing off to userland, then periodically checks it hasn't changed? Anything else on the state of the art of immutable/integrity-checked VMs?
I see the use case for servers targeted by malicious actors. A penetration test on an hardened system with secure boot and binary verification would be much harder.
For individuals, IMO the risk mostly comes from software they want to run (install scripts or supply-chain attacks). So if the end user is in control of what gets signed, I don't see much benefit. Unless you force users to use an app store...
The immediate concern on seeing this is: will the maintainers of systemd use their position to push this on everyone through systemd, like every other extended feature of systemd?
Whatever it is, I hope it doesn't go the usual path of a minimal support, optional support and then being virtually mandatory by means of tight coupling with other subsystems.
Daan here, founding engineer and systemd maintainer.
So we try to make every new feature that might be disruptive optional in systemd and opt-in. Of course we don't always succeed and there will always be differences in opinion.
Also, we're a team of people that started in open source and have done open source for most of our careers. We definitely don't intend to change that at all. Keeping systemd a healthy project will certainly always stay important for me.
Thanks for the answer. Let me ask you something close with a more blunt angle:
Considering most of the tech is already present and shipping in current systemd, what prevents our systems from becoming an immutable monolith like macOS or current Android with the flick of a switch?
Or a graver scenario: what prevents Microsoft from mandating the removal of key-enrollment permissions and the Secure Boot toggle, so that every Linux distribution has to go through Microsoft's blessing to be bootable?
If you were not a systemd maintainer and had started this project/company independently, targeting systemd, you would have had to go through the same process as everyone else, and I would have expected the systemd maintainers to look at it objectively and review it with healthy skepticism before accepting it. But we cannot rely on those basic checks and balances anymore, and that's the most worrying part.
> that might be disruptive optional in systemd
> we don't always succeed and there will always be differences in opinion.
You (including the other maintainers) are still the final arbiter of what's disruptive. The differences of opinion in the past have mostly been settled as "deal with it", and that's the basis of the current skepticism.
Frankly this disgusts me. While there are technically user-empowering ways this can be used, by far the most prevalent use will be to lock users/customers out of true ownership of their own devices.
Device attestation fails? No streaming video or audio for you (you obvious pirate!).
Device attestation fails? No online gaming for you (you obvious cheater!).
Device attestation fails? No banking for you (you obvious fraudster!).
Device attestation fails? No internet access for you (you obvious dissident!).
Sure, there are some good uses of this, and those good uses will happen, but this sort of tech will be overwhelmingly used for bad.
I just want more trustworthy systems. This particular concept of combining reproducible builds, remote attestation and transparency logs is something I came up with in 2018. My colleagues and I started working on it, took a detour into hardware (tillitis.se) and kind of got stuck on the transparency part (sigsum.org, transparency.dev, witness-network.org).
Then we discovered snapshot.debian.org wasn't feeling well, so that was another (important) detour.
Part of me wish we had focused more on getting System Transparency in its entirety in production at Mullvad. On the other hand I certainly don't regret us creating Tillitis TKey, Sigsum, taking care of Debian Snapshot service, and several other things.
Now, six years later, systemd and other projects have gotten a long way to building several of the things we need for ST. It doesn't make sense to do double work, so I want to seize the moment and make sure we coordinate.
Trusted computing and remote attestation is like two people who want to have sex requiring clean STD tests first. Either party can refuse and thus no sex will happen. A bank trusting a random rooted smartphone is like having sex with a prostitute with no condom. The anti-attestation position is essentially "I have a right to connect to your service with an unverified system, and refusing me is oppression." Translate that to the STD context and it sounds absurd - "I have a right to have sex with you without testing, and requiring tests violates my bodily autonomy."
You're free to root your phone. You're free to run whatever you want. You're just not entitled to have third parties trust that device with their systems and money. Same as you're free to decline STD testing - you just don't get to then demand unprotected sex from partners who require it.
You are trying to portray it as an exchange between equal parties, which it isn't. I am totally entitled not to have to use a third-party-controlled device to access government services. Or my bank account.
remote attestation is just fancy digital signatures with hardware protected secret keys. Are you freaking out about digital signatures used anywhere else?
I always wondered how this works in practice for "real time" use cases. We've seen with Secure Boot + TPM that we can attest that the boot was genuine at some point in the past, but what about modifications that happen after that?
As per the announcement, we’ll be building this over the next months and sharing more information as this rolls out. Much of the fundamentals can be extracted from Lennart’s posts and the talks from All Systems Go! over the last years.
Remote attestation only works because your CPU's secure enclave has a private key burned-in (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
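The mechanism can be illustrated without a TPM at all, since an attestation quote is at its core just a signature over the measured state. A toy sketch with the openssl CLI (the generated key merely stands in for the fused device key, and the public key for its vendor-issued certificate):

```shell
# Minimal stand-in for an attestation "quote": sign a digest of the
# claimed measurements with a device-held private key, then verify it
# with the manufacturer-certified public key.
openssl ecparam -genkey -name prime256v1 -noout -out device.key  # stands in for the fused key
openssl ec -in device.key -pubout -out device.pub 2>/dev/null    # what the vendor would certify

printf 'pcr0=deadbeef\npcr7=cafebabe\n' > measurements.txt       # the claimed boot state
openssl dgst -sha256 -sign device.key -out quote.sig measurements.txt
openssl dgst -sha256 -verify device.pub -signature quote.sig measurements.txt
# prints "Verified OK"; any tampering with measurements.txt makes it fail
```

The relying party's trust therefore bottoms out in the manufacturer's certificate chain, not in anything the device owner controls.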
I'm not seeing any big problems with the portraits.
Having said that, should this company not be successful, Mr Zbyszek Jędrzejewski-Szmek has potentially a glowing career as an artists' model. Think Rembrandt sketches.
I look forward to something like ChromeOS that you can just install on any old refurbished laptop. But I think the money is in servers.
Are you guys hiring? I can emulate a grim smile and have no problem being diabolical if the pay is decent so maybe I am a good fit?
I can also pet goats
this is very interesting... been watching the work around bootc coupling with composefs + dm_verity + signed UKI, I'm wondering if this will build upon that.
So much negativity in this thread. I actually think this could be useful, because tamper-proof computer systems are useful to prevent evil maid attacks. Especially in the age of Pegasus and other spyware, we should also take physical attack vectors into account.
I can relate to people being rather hostile to the idea of boot verification, because this is a process that is really low level and also something that we as computer experts rarely interact with more deeply. The most challenging part of installing a Linux system is always installing the boot loader, potentially setting up a UEFI partition. These are things that I don't do every day and that I don't have deep knowledge of. And if things go wrong, then it is extra hard to fix them. Secure boot makes it even harder to understand what is going on. There is a general lack of knowledge of what is happening behind the scenes, and it is really hard to learn about it. I feel that the people behind this project should really keep XKCD 2501 in mind when talking to their fellow computer experts.
it won’t matter if you disable it. You simply won’t be able to use your PC with any commercial services, in the same way that a rooted Android installation can’t run banking apps without doing things to break that, and what they’re working on here aims to make that “breakage” impossible.
I can see like a hundred ways this can make computing worse for 99% of people, and like 1-2 scenarios where it might actually be useful.
Like if the politicians pushing for chat control/on device scanning of data come knocking again and actually go through (they can try infinitely) tech like this will really be "useful". Oops your device cannot produce a valid attestation, no internet for you.
Hmph, AFAIK systemd has been struggling with TPM stuff for a while (much longer than I anticipated). It’s kinda understandable that the founder of systemd is joining this attestation business, because attestation ultimately requires far more than a stable OS platform plus an attestation module.
A reliably attestable system has to nail the entire boot chain: BIOS/firmware, bootloader, kernel/initramfs pairs, the `init` process, and the system configuration. Flip a single bit anywhere along the process, and your equipment is now a brick.
Getting all of this right requires deep system knowledge, plus a lot of hair-pulling adjustment, assuming you still have hair left.
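The "flip a single bit" point above is a direct consequence of how TPM PCRs chain measurements: each boot stage is hashed into a running register, so a change anywhere changes the final value. A minimal sketch of the extend operation (the stage names are made-up stand-ins; real firmware measures binary blobs into specific PCR indices):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new = SHA-256(old || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Hypothetical boot stages, measured in order.
stages = [b"firmware", b"bootloader", b"kernel+initramfs", b"init", b"config"]

pcr = bytes(32)  # PCRs start zeroed at power-on
for blob in stages:
    pcr = extend(pcr, blob)
good = pcr

# One altered byte in one stage and the final PCR no longer matches, so
# anything sealed against it (disk keys, attestation quotes) stops working.
tampered = bytes(32)
for blob in [b"firmware", b"bootloader", b"kernel+initramfs", b"init", b"confih"]:
    tampered = extend(tampered, blob)

assert good != tampered
```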
I think this part of Linux has been underrated. TPM is a powerful platform that is universally available, and Linux is the perfect OS to fully utilize it. The need for trust in the digital realm will only increase. Who knows, it may even integrate with cryptocurrency or even social platforms. I really wish them good luck.
The typical HN rage-posting about DRM aside, there's no reason that remote attestation can't be used in the opposite direction: to assert that a server is running only the exact code stack it claims to be, avoiding backdoors. This can even be used with fully open-source software, creating an opportunity for OSS cloud-hosted services which can guarantee that the OSS and the build running on the server match. This is a really cool opportunity for privacy advocates if leveraged correctly - the idea could be used to build something like Apple's Private Cloud Compute but even more open.
Like evil maid attacks, this is a vanishingly rare scenario brought out to try to justify technology that will overwhelmingly be used to restrict computing freedom.
In addition, the benefit is a bit ridiculous, like that of DRM itself. Even if it worked, your "trusted software" is literally going to be running in an office full of the most advanced crackers money can buy, with every incentive to exploit your scheme and not publish the fact that they did. The attack surface of the entire thing is so large it boggles the mind that there are people who believe in the "secure computing cloud" scenario.
WHAT is the usage and benefit for private users? This is always neglected.
As a private person, avoiding backdoors is something you can only solve by having the hardware at your place, because hardware can ALWAYS have backdoors, and hardware vendors do not fix their shit.
From my point of view it ONLY gives control and possibilities to large organizations like governments and companies, which in turn use it to control citizens.
> but considering Windows requirements drive the PC spec, this capability can be used to force Linux distributions in bad ways
What do you mean by this?
Is the concern that systemd is suddenly going to require that users enable some kind of attestation functionality? That making attestation possible or easier is going to cause third parties to start requiring it for client machines running Linux? This doesn't even really seem to be a goal; there's not really money to be made there.
As far as I can tell the sales pitch here is literally "we make it so you can assure the machines running in your datacenter are doing what they say they are," which seems pretty nice to me, and the perversions of this to erode user rights are either just as likely as they ever were or incredibly strange edge cases.
> there's no reason that remote attestation can't be used in the opposite direction
There is: corporate will fund this project and enforce its usage for their users not for the sake of the users and not for the sake of doing any good.
What it will be used for is to bring you a walled garden into Linux and then slowly incentivize all software vendors to only support that variety of Linux.
LP has a vast, vast experience in locking down users' freedom and locking down Linux.
> There is: corporate will fund this project and enforce its usage for their users not for the sake of the users and not for the sake of doing any good.
I'd really love to see this scenario actually explained. The only place I could really see client-side desktop Linux remote attestation gaining any foothold is to satisfy anti-cheat for gaming, which might actually be a win in many ways.
> What it will be used for is to bring you a walled garden into Linux and then slowly incentivize all software vendors to only support that variety of Linux.
What walled garden? Where is the wall? Who owns the garden? What is the actual concrete scenario here?
> LP has a vast, vast experience in locking down users' freedom and locking down Linux.
What? You can still use all of the Linuxes you used to use? systemd is open source, open-application, and generally useful?
Like, I guess I could twist my brain into a vision where each Ubuntu release becomes an immutable rootfs.img and everyone installs overlays over the top of that, and maybe there's a way to attest that you left the integrity protection on, but I don't really see where this goes past that. There's no incentive to keep you from turning the integrity protection off (and no means to do so on PC hardware), and the issues in Android-land with "typical" vendors wanting attestation to interact with you are going to have to come to MacOS and Windows years before they'll look at Linux.
The idea is that by protecting boot path you build a platform from which you can attest the content of the application. The goal here is usually that a cloud provider can say “this cryptographic material confirms that we are running the application you sent us and nothing else” or “the cloud application you logged in to matched the one that was audited 1:1 on disk.”
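That verification flow can be modeled in a few lines. In this toy sketch an HMAC stands in for the TPM's quote signature, and all names (PCR indices, hash values, the shared key) are illustrative, not a real TPM API:

```python
import hashlib
import hmac

# Toy verifier: the provider returns a keyed digest over measured PCR
# values; the customer compares it against the values that a known-good,
# audited build should produce. A real TPM signs the quote with an
# attestation key instead of using a shared secret.
ATTESTATION_KEY = b"shared-demo-key"

def quote(pcrs: dict) -> bytes:
    """Digest the PCR values in index order, keyed like a TPM quote."""
    blob = b"".join(pcrs[i] for i in sorted(pcrs))
    return hmac.new(ATTESTATION_KEY, blob, hashlib.sha256).digest()

golden = {0: b"fw-hash", 4: b"kernel-hash", 11: b"app-hash"}    # from audit
reported = {0: b"fw-hash", 4: b"kernel-hash", 11: b"app-hash"}  # from cloud

# Matching digests mean the running stack is the one that was audited.
assert hmac.compare_digest(quote(reported), quote(golden))
```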
"We are confident we have a very robust path to revenue."
I take it that you are not at this stage able to provide details of the nature of the path to revenue. On what kind of timescale do you envisage being able to disclose your revenue stream/subscribers/investors?
Appreciate the clarification, but this actually raises more questions than it answers.
A "robust path to revenue" plus a Linux-based OS and a strong emphasis on EU / German positioning immediately triggers some concern. We've seen this pattern before: wrap a commercially motivated control layer in the language of sovereignty, security, or European tech independence, and hope that policymakers, enterprises, and users don't look too closely at the tradeoffs.
Europe absolutely needs stronger participation in foundational tech, but that shouldn't mean recreating the same centralized trust and control models that already failed elsewhere, just with an EU flag on top. 'European sovereignty' is not inherently better if it still results in third-party gatekeepers deciding what hardware, kernels, or systems are "trusted."
Given Europe's history with regulation-heavy, vendor-driven solutions, it's fair to ask:
Who ultimately controls the trust roots?
Who decides policy when commercial or political pressure appears?
What happens when user interests diverge from business or state interests?
Linux succeeded precisely because it avoided these dynamics. Attestation mechanisms that are tightly coupled to revenue models and geopolitical branding risk undermining that success, regardless of whether the company is based in Silicon Valley or Berlin.
Hopefully this is genuinely about user-verifiable security and not another marketing-driven attempt to position control as sovereignty. Healthy skepticism seems warranted until the governance and trust model are made very explicit.
This is relevant. Every project he's worked on has been a dumpster fire. systemd sucks. PulseAudio sucks. GNOME sucks. Must the GP list out all the ways in which they suck to make it a more objective attack?
People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.
Imagine you're using a program hosted on some cloud service S. You send packets over the network; gears churn; you get some results back. What are the problems with such a service? You have no idea what S is doing with your data. You incur latency, transmission time, and complexity costs using S remotely. You pay, one way or another, for the infrastructure running S. You can't use S offline.
Now imagine instead of S running on somebody else's computer over a network, you run S on your computer instead. Now, you can interact with S with zero latency, don't have to pay for S's infrastructure, and you can supervise S's interaction with the outside world.
But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
However --- if S's author could run S on your computer in such a way that he could prove you haven't tampered with S or haven't observed its secrets, he can let you run S on your computer without giving up control over S. Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
Sure, in this deployment model, just like in the cloud world, you wouldn't be able to run a custom S: but so what? You don't get to run your custom S either way, and this way, relative to cloud deployment, you get better performance and even a little bit more control.
Also, the same thing works in reverse. You get to run your code remotely in a such a way that you can trust its remote execution just as much as you can trust that code executing on your own machine. There are tons of applications for this capability that we're not even imagining because, since the dawn of time, we've equated locality with trust and can now, in principle, decouple the two.
Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
> People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.
But it won't be used like that. It will be used to take user freedoms out.
> But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
That use case you're describing is already there and is currently being done with DRM, either in browser or in app itself.
You are right in the "it will make easier for app user to do it", and in theory it is still better option in video games than kernel anti-cheat. But it is still limiting user freedoms.
> Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
The majority of the uses will be user-hostile things, because those are the only cases where someone will decide to fund it.
> Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
To be honest, mainly companies need that; personal users do not. And additionally, companies are NOT restrained by governments from exploiting customers as much as possible.
So... I also see it as enslaving users. And tell me: where does this actually give PRIVATE persons, NOT companies, a net benefit?
> This potential shouldn't prevent our inventing new kinds of tool.
Why do I see someone who wants to build an atomic bomb for shits and giggles using this argument, too? As hyperbolic as my argument is, the argument given here is not good either.
The immutable Linux people build tools, without building good tools that actually make it easier for private people at home to adapt an immutable Linux to THEIR liking.
I will put some trust into these people if they make this a pure nonprofit organization at the minimum. Building IN measures to ensure that this will not be pushed for the most obvious use case, which is to fight user freedom. This shouldn't be some afterthought.
"Trust us" is never a good idea with profit seeking founders. Especially ones who come from a culture that generally hates the hacker spirit and general computing.
You basically wrote a whole narrative of things that could be. But the team is not even willing to make promises as big as yours. Their answers were essentially just "trust us we're cool guys" and "don't worry, money will work out" wrapped in average PR speak.
I'm guessing you're referencing my comment, that isn't what I said.
> But the team is not even willing to make promises as big as yours.
Be honest, look at the comment threads for this announcement. Do you honestly think a promise alone would be sufficient to satisfy all of the clamouring voices?
No, people would (rightfully!) ask for more and more proof -- the best proof is going to be to continue building what we are building, and then you can judge it on its merits. There are lots of justifiable concerns people have in this area, but most either don't really apply to what we are building or are much larger social problems that we really are not in a position to affect.
I would also prefer to be judged based on my actions, not on wild speculation about what I might theoretically do in the future.
Will it be backdoorable, the way systemd-enabled distros nearly shipped a backdoored SSH (the xz incident)? Non-systemd distros weren't affected.
Why should we trust microsofties to produce something secure and non-backdoored?
And, lastly, why should Linux's security be tied to a private company? Oooh, but it's of course not about security: it's about things like DRM.
I hope Linus doesn't get blinded here: systemd managed to get PID 1 on many distros but they thankfully didn't manage, yet, to control the kernel. I hope this project ain't the final straw to finally meddle into the kernel.
Currently I'm doing:
Proxmox / systemd-less VMs / containers
But Proxmox is Debian based, and Debian really drank too much of the systemd koolaid.
So my plan is:
FreeBSD / bhyve hypervisor / systemd-less Linux VMs / containers
And then I'll be, at long last, systemd-free again.
This project is an attack on general-purpose computing.
Cheating was solved before any of this rootkit level malware horseshit.
Community ran servers with community administration who actually cared about showing up and removing bad actors and cheaters.
Plenty of communities are still demonstrating this exact fact today.
Companies could 100% recreate this solution with fully hosted servers, with an actually staffed moderation department, but that slightly reduces profit margins so fuck you. Keep in mind community servers ran on donations most of the time. That's the level of profit they would lose.
Companies completely removed community servers as an option instead, because allowing you to run your own servers means you could possibly play the game with skins you haven't paid for!!! Oh no!!! Getting enjoyment without paying for it!!!
All software attempts at anti-cheat are impossible. Even fully attested consoles have had cheats and other ways of getting an advantage that you shouldn't have.
Cheating isn't defined by software. Cheating is a social problem that can only be solved socially. The status quo 20 years ago was better.
Everyday the world is becoming more polarized. Technology corporations gain ever more control over people's lives, telling people what they can do on their computers and phones, what they can talk about on social platforms, censoring what they please, wielding the threat of being cutoff from their data, their social circles on a whim. All over the world, in dictatorships and also in democratic countries, governments turn more fascist and more violent. They demonstrate that they can use technology to oppress their population, to hunt dissent and to efficiently spread propaganda.
In that world, authoring technology that enables this even more is either completely mad or evil. To me Linux is not a technological object, it is also a political statement. It is about choice, personal freedom, acceptance of risk. If you build software that actively intends to take this away from me to put it into the hands of economic interests and political actors then you deserve all the hate you can get.
> To me Linux is not a technological object, it is also a political statement. It is about choice, personal freedom ...
I have used Linux since the Slackware days. Poettering is the worst thing that happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
This is not about security for the users. It's about control.
At least many in this thread are criticizing the project.
And, once again of course, it's from a private company.
Full of ex-Microsofties.
I don't know why anyone interested in hacking would cheer for this. But then maybe HN should be renamed "CN" (Corporate News) or "MN" (Microsoft News).
> Poettering is the worse thing that happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
agreed, and now he's planning on controlling what remains of your machine cryptographically!
Hi, Chris here, CEO @ Amutable. We are very excited about this. Happy to answer questions.
Obviously I would not pay for such a device (and will always have a general purpose computer that runs my own software), but if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services.
I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
It would quickly get out of hand if every online service started to do the same though. But, if remote device attestation continues to be pushed and we continue to have less and less control and ownership over our devices, I definitely see a world where I now carry two phones. One running something like GrapheneOS, connected to my own self-hosted services, and a separate "approved" phone to interact with public and essential services as they require crap like play integrity, etc.
But at the end of the day, I still fail to see why this is even needed. Governments, banks, and other entities have been providing services over the web for decades at this point with little issue. Why are we catering to tech illiteracy (by restricting ownership) instead of promoting tech education and encouraging people to learn and, importantly, to take responsibility for their own actions and the consequences of those actions?
"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.
> if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services
They would only do it to assert more control over you and in Netflix's case, force more ads on you.
It is why I never use any company's apps.
If they make it a requirement, I will just close my account.
The bank thing is a smoke screen.
This entire shit storm is 100% driven by the music, film, and tv industries, who are desperate to eke a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].
These people are the enemy. They do not care about computing freedom. They don't care about you or me at all. They only care about increasing profits, and they're using the threat of locking people out of Netflix via HDCP and TPM in order to force remote attestation on everyone.
I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like TFfounders actively building things that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".
[0] https://globalnews.ca/news/11026906/music-industry-limewire-...
Yeah, as I am reading the landing page, the direction seems clear. It sucks, because as an individual there is not much one can do, and there is no consensus that it is a bad thing ( and even if there was, how to counter it ). Honestly, there are times I feel lucky to be as dumb as I am. At least I don't have the same responsibility for my output as people who create foundational tech and code.
Yup
Poettering is a well-known Linux saboteur, along with Red Hat. Without RH pushing his trash, he is not really that big of a threat.
Just like de Icaza, another saboteur, he ran off to MS. That is the tell-tale sign for people not convinced that either person's work in FOSS existed to cause damage.
No, this is not a snarky, sarcastic comment. Trust Amutable at your own peril.
My tinfoil hat theory is devices like HDDs will be locked and only work on "attested" systems that actively monitor the files. This will be pushed by the media industry to combat piracy. Then opened up for para-law enforcement like palantir.
Then gpu and cpu makers will hop on and lock their devices to promote paid Linux like redhat. Or offering "premium support" to unlock your gpu for Linux for a monthly fee.
They'll say "if you are a Linux enthusiast then go tinker with arm and risc on an SD card"
> [T]he war on general computing and computer ownership [...] It is exhausting to see the hatred some have for people just owning their hardware.
The integrity of a system being verified/verifiable doesn't imply that the owner of the system doesn't get to control it.
This sort of e2e attestation seems really useful for enterprise or public infrastructure. Like, it'd be great to know that the ATMs or transit systems in my city had this level of system integrity.
Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
> Your argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.
Once it's out there and normalized, the individual engineers don't get to control how it is used. They never do.
You want PCIe-6? Cool well that only runs on Asus G-series with AI, and is locked to attested devices because the performance is so high that bad code can literally destroy it. So for safety, we only run trusted drivers and because they must be signed, you have to use Redhat Premium at a monthly cost of $129. But you get automatic updates.
System integrity also ends at the border of the system. The entire ecosystem of ATM skimmers demonstrates this-- the software and hardware are still 100% sanctioned, they're just hidden beneath a shim in the card slot and a stick-on keypad module.
I generally agree with the concept of "if you want me to use a pre-approved terminal, you supply it." I'd think this opens up a world of better possibilities. Right now, the app-centric bank/media company/whatever has to build apps that are compatible with 82 bazillion different devices, and then deal with the attestation tech support issues. Conversely, if they provide a custom terminal, it might only need to deal with a handful of devices, and they could design it to function optimally for the single use case.
> At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.
The problem is that there are powerful corporate and government interests who would love nothing more than to prevent users from controlling the keys for their own computers, and they can make their dream come true simply by passing a law.
It may be the case that certain users want to ensure that their computers are only running their code. But the same technologies can also used to ensure that their computers are only running someone else's code, locking users out from their own devices.
Remote attestation only works because your CPU's secure enclave has a private key burned-in (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
Every time you perform an attestation the public key (and certificate) is divulged which makes it a unique identifier, and one that can be traced to the point of sale - and when buying a used device, a point of resale as the new owner can be linked to the old one.
They make an effort to increase privacy by using intermediaries to convert the identifier to an ephemeral one, and use the ephemeral identifier as the attestation key.
This does not change the fact that if the party you are attesting to gets together with the intermediary they will unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.
Also note that nothing can prevent you from forging attestations if you source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.
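The log-join unmasking described above is easy to model: the intermediary ("privacy CA") knows which permanent endorsement key (EK) received which ephemeral attestation keys (AIKs), so its log plus the relying party's log collapses every "anonymous" session back to one device. A toy illustration with hypothetical identifiers:

```python
# Intermediary's conversion log: (permanent EK, ephemeral AIK it issued).
intermediary_log = [
    ("EK-device-42", "AIK-aaa"),
    ("EK-device-42", "AIK-bbb"),
    ("EK-device-77", "AIK-ccc"),
]

# Relying party's attestation log: (AIK, observed activity).
relying_party_log = [
    ("AIK-aaa", "logged into service as alice"),
    ("AIK-bbb", "posted anonymously"),
]

# Joining the two logs unmasks the device behind each ephemeral key.
aik_to_ek = {aik: ek for ek, aik in intermediary_log}
unmasked = [(aik_to_ek[aik], what) for aik, what in relying_party_log]
# Both sessions now trace to EK-device-42, i.e. the same physical CPU.
```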
I tend to buy such things with cash, in person.
People dislike cash for some strange reason, then complain about tracking. People also hand out their mobile number like candy. Same issue.
> People dislike cash for some strange reason
In my case it is because I would never have the right amount with me, in the right denominations. Google Pay always has this covered.
Also you need to remember to take one more thing with you, and refill it occasionally. And unlike fuel, you do not know how much you will need, or when.
It can get lost or destroyed, and is not (usually) replaceable.
I am French, currently in the US. I need to change 100 USD in small denominations, I will need to go to the bank, and they will hopefully do that for me. Or not. Or not without some official paper from someone.
Ah yes, and I am in the US and the Euro is not an accepted currency here. So I need to take my 100 € to a bank and hope I can get 119.39 USD. In the right denominations.
What will I do with the 34.78 USD left when I am back home? I have a chest of money from all over the world. I showed it once to my kids when they were young, told a bit about the world and then forgot about it.
Money also weighs quite a lot. And when it does not weigh much, it gets lost or thrown away with some other papers. Except if it is neatly folded in a wallet, which I will forget.
I do not care about being traced when going to the supermarket. If I need to do untraceable stuff I will get money from the ATM. Ah crap, they will trace me there.
So the only solution is to get my salary in cash, which is forbidden in France. Or take out small amounts from time to time. Which I will forget, and I have better things to do.
Cash sucks.
Sure, if we go cashless and terrible things happen (cyberwar, solar flare, software issues) then we are screwed. But either the situation unscrews itself, or we will have much, much, much bigger issues than money -- we will need to go full survival mode, apocalypse movies-style.
Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation
> Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation
Which does exactly what I said. Full zero knowledge attestation isn't practical as a single compromised key would give rise to a service that would serve everyone.
AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms.
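For reference, the blind-signature primitive mentioned here can be sketched with textbook RSA: the signer vouches for a message without ever seeing it, which is exactly what would make signed attestations unlinkable (and, as noted, what would enable attestation farms). Toy parameters only, with no padding or hashing, so this is not usable as-is:

```python
import math
import random

# Tiny textbook RSA key (never use parameters like these for real).
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

m = 424242 % n                      # message (e.g. an attestation nonce)

while True:                         # pick a blinding factor coprime to n
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break

blinded = (m * pow(r, e, n)) % n    # user blinds the message
s_blind = pow(blinded, d, n)        # signer signs without seeing m
s = (s_blind * pow(r, -1, n)) % n   # user strips the blinding factor

assert pow(s, e, n) == m            # valid signature on m; signer never saw m
```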
But what's it attesting? Their byline "Every system starts in a verified state and stays trusted over time" should be "Every system starts in a verified state of 8,000 yet-to-be-discovered vulns and stays in that vulnerable state over time". The figure is made up but see for example https://tuxcare.com/blog/the-linux-kernel-cve-flood-continue.... So what you're attesting is that all the bugs are still present, not that the system is actually secure.
2 replies →
I’m not sure I understand the threat model for this. Why would I need to worry about my enclave being identifiable? Or is this a business use case?
Or why buy used devices if this is a risk?
It's a privacy consideration. If you desire to juggle multiple private profiles on a single device, extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Or better yet, have both permanently disabled.
Hardware fingerprinting in general is a difficult thing to protect from - and in an active probing scenario where two apps try to determine if they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer traceable serials.
Remote attestation requires collusion with an intermediary at least; DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server, of which there are many, under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.
Using Widevine for this purpose breaks Google's ToS, but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it, which may be all but impossible, as an app doing it could just have a remote code execution "vulnerability" and make Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.
3 replies →
For most individuals it usually doesn’t matter. It might matter if you have an adversary, e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross‑tenant linkage.
Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.
It's a gun with a serial number. The Fast and Furious gunwalking scandal of the Obama years was traced and proven with this kind of thing.
1 reply →
I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
1 reply →
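The secrets-after-verified-boot flow described above can be sketched roughly as follows. The names (`EXPECTED_MEASUREMENTS`, `release_db_key`) are hypothetical, and a real system would verify a signed TPM quote rather than trust a bare reported digest:

```python
import hashlib, hmac, os

# Allow-list mapping image names to the SHA-256 of boot images we trust.
# (Illustrative values; real measurements come from the TPM event log.)
EXPECTED_MEASUREMENTS = {
    "worker-v1.2": hashlib.sha256(b"signed-image-v1.2").hexdigest(),
}

def release_db_key(image_name, reported_digest):
    """Hand out a secret only if the node reports an approved boot measurement."""
    expected = EXPECTED_MEASUREMENTS.get(image_name)
    # Constant-time comparison to avoid leaking digest prefixes.
    if expected is None or not hmac.compare_digest(expected, reported_digest):
        return None  # refuse: node did not boot an approved image
    return os.urandom(32)  # stand-in for the real database key

good = hashlib.sha256(b"signed-image-v1.2").hexdigest()
assert release_db_key("worker-v1.2", good) is not None
assert release_db_key("worker-v1.2", "deadbeef") is None
```

The key point is that the node never holds the database key before the attestation check passes, so a machine booted from a tampered image simply never receives it.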
At this point these are just English sentences. I am not worried about this threat model at all.
This seems like the kind of technology that could make the problem described in https://www.gnu.org/philosophy/can-you-trust.en.html a lot worse. Do you have any plans for making sure it doesn't get used for that?
I'm Aleksa, one of the founding engineers. We will share more about this in the coming months but this is not the direction nor intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom, in practice being able to do this is far more preferable for enterprises with strict security controls.
I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Thanks for the clarification and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start, it's how projects evolve once they matter.
History is pretty consistent here:
WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.
Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.
Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.
Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.
Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.
Common failure modes:
Enterprise customers demand guarantees - policy creeps in.
Governments demand compliance - exceptions appear.
Liability enters the picture - defaults shift to "safe for the company."
Revenue depends on trust decisions - neutrality erodes.
Core maintainers lose leverage - architecture hardens around control.
Even if keys are user-controlled today, the key question is architectural: Can this system resist those pressures long-term, or does it merely promise to?
Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.
I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.
2 replies →
Can you (or someone) please tell what’s the point, for a regular GNU/Linux user, of having this thing you folks are working on?
I can understand corporate use case - the person with access to the machine is not its owner, and corporation may want to ensure their property works the way they expect it to be. Not something I care about, personally.
But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing to surrender any rights. Paired with awareness gap the effects were disastrous, and not just for personal compute.
So, what’s the point and what’s the value?
9 replies →
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Until you get acquired, receive a golden parachute and use it when realizing that the new direction does not align with your views anymore.
But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.
13 replies →
So far, that's a slick way of saying not really. You are vague where it counts, and surely you have a better idea of the direction than you let on.
Attestation of what, to whom, for which purpose? What freedom does user control of keys actually preserve, and how does it square with remote attestation and the wishes of enterprise users?
2 replies →
Thanks, this would be helpful. I will follow on by recommending that you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want, when planning or releasing products. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!
Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.
that’s great that you’ll let users have their own certificates and all, but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
it will be railroaded through in the same way that systemd was railroaded onto us.
2 replies →
What was it that the Google founders said about not adding advertisements to Google search?
> The models we have in mind for attestation are very much based on users having full control of their keys.
If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.
> The models we have in mind for attestation are very much based on users having full control of their keys.
FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this there's no going back.
Please have a hard and honest think on whether you should actually build this thing. Because once you do, the genie is out and there's no going back.
This WILL be used to infringe on individual freedoms.
The only question is WHEN? And your answer to that appears to be 'Not for the time being'.
Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.
It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.
Can I build my own kernel and still use software that wants attestation?
3 replies →
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
The road to hell is paved with good intentions.
That's not the intention, but how do you stop it from being the effect?
Glad to hear it! I am not surprised given the names and the fact you're at FOSDEM.
This is extremely bad logic. The technology of enforcing trusted software has no inherent value; it is good or ill depending entirely on expected usage. Anything that is substantially open will be used according to the values of its users, not according to your values, so we ought to consider their values, not yours.
Suppose a fascist state wanted to identify potential agitators by scanning all communication for indications of dissent. It could require this technology in all trusted environments, and require such an environment to bank, connect to an ISP, or use Netflix.
One could even imagine a completely benign usage which only identified actual wrongdoing, alongside another which profiled based almost entirely on anti-regime sentiment or reasonable discontent.
The good users would argue that the only problem with the technology is its misuse, but without the underlying technology such misuse is impossible.
One can imagine two entirely different parallel universes, one in which a few great powers went the wrong way, enabled in part by trusted computing: pervasive surveillance made feasible by AI doing the massive, boring work of analyzing a glut of ordinary behaviour and communication, plus the tech and law to ensure said surveillance is carried out.
Even those not misusing the tech may find themselves worse off in such a world.
Why again should we trust this technology just because you are a good person?
2 replies →
What engineering discipline?
PE or EIT?
You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.
half of the founders of this thing come from Microsoft. I suppose this makes the answer to your question obvious.
My thoughts exactly. We're probably witnessing the beginning of the end of linux users being able to run their own kernels. Soon:
- your bank won't let you log in from an "insecure" device.
- you won't be able to play videos on an "insecure" device.
- you won't be able to play video games on an "insecure" device.
And so on, and so forth.
34 replies →
that's a silver lining
the anti-user attestation will at least be full of security holes, and likely won't work at all
83 replies →
"At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus."
Please don't bring attestation to common Linux distributions. This technology, by its essence, moves trust to a third party distinct from the user. I don't see how it can be useful in any way to end users like most of us here. Its use by corporations has already caused too much damage and exclusion in the mobile landscape, and I don't want folks like us becoming pariahs in our own world, just because we want machines we bought to be ours...
A silver lining is that it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
To anyone thinking that's not possible: we already switched inits to systemd. And being persnickety saw MariaDB replace MySQL everywhere, LibreOffice replace OpenOffice, and so on.
All the recent pushiness by a certain zealous Italian Debian maintainer only helps this case. Trying to degrade Debian into a clone of Red Hat is uncouth.
> A silver lining, is it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.
This misunderstands why systemd succeeded. It included several design decisions aimed at easing distribution maintainers' burdens, thus making adoption attractive to the same people that would approve this adoption.
If a systemd fork differentiates on not having attestation and getting rid of an unspecified set of "all the silly parts", how would they entice distro maintainers to adopt it? Elaborating what is meant by "silly parts" would be needed to answer that question.
2 replies →
Attestation is a critical feature for many H/W companies (e.g. IoT, robotics), and they struggle with finding security engineers with expertise in this area (disclaimer: I used to work as an operating system engineer + security engineer). Many distros are designed not only for desktop users, but also for industrial uses. If distros ship standardized packages in this area, it would help those companies a lot.
This is the problem with Linux in general. It's far too infiltrated by our adversaries from the big tech industry.
Look at all the kernel patch submissions. 90% are not users but big tech drones. Look at the Linux foundation board. It's the who's who of big tech.
This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial, the BSDs started commercial but are hardly still used as such and are mostly user driven now (yes there's a few exceptions like netflix, netgate, ix etc but nothing on the scale of huawei, Amazon etc)
6 replies →
> Attestation is a critical feature for many H/W companies
Like John Deere. Read about how they use that sort of thing
IoT and robotics should (dare I say "must"?) not use general-purpose OSes at all.
This «Linux has a finger in every pie» attitude is very harmful for the industry, IMHO.
6 replies →
I'm not too big in this field, but didn't many of those same IoT companies struggle with packages becoming dependent on Poettering's work, since they often needed much smaller/minimal distros?
4 replies →
Then they can go and buy some other OS like VxWorks.
It is already part of the most common Linux distribution, Android.
Please do, I disagree with this commenter.
You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be displaced by an open source developer, it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, however distros by default should use it, as they see fit of course.
I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like secureboot isn't configurable at install time, with custom certs, distro certs, or certs generated at install time.
Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.
Whoever uses this seeks to ensure a certain kind of behavior on a machine they typically don't own (in the legal sense of it). So of course you can make it optional. But then software that depends on it, like your banking Electron app or your Steam game, will refuse to run... so as the user, you don't really have a choice.
I would love to use that technology to do reverse attestation, and require the server that handles my personal data to behave a certain way, like obeying the privacy policy terms of the EULA and not using my data to train LLMs if I so opted out. Something tells me that's not going to happen...
see the latest "MS just divulged disk encryption keys to govt" news to see why this is a horrid idea
5 replies →
It could be an open source developer yes but in practice it's always the big tech companies. Look at how this evolved in mobile phones.
It's also because content companies and banks want other people in suits to trust.
[dead]
My only experience with Linux secure boot so far.... I wasn't even aware that it was secure booted. And I needed to run something (I think it was the Displaylink driver) that needs to jam itself into the kernel. And the convoluted process to do it failed (it's packaged for Ubuntu but I was installing it on a slightly outdated Fedora system).
What, this part is only needed for secure boot? I'm not sec... oh. So go back to the UEFI settings, turn secure boot off, problem solved. I usually also turn off SELinux right after install.
So I'm an old greybeard who likes to have full control. Less secure. But at least I get the choice. Hopefully I continue to do so. The notion of not being able to access online banking services or other things that require account login, without running on a "fully attested" system does worry me.
Secure Boot only extends the chain of trust from your firmware down to the first UEFI binary it loads.
Currently SB is effectively useless because it will at best authenticate your kernel but the initrd and subsequent userspace (including programs that run as root) are unverified and can be replaced by malicious alternatives.
Secure Boot as it stands right now in the Linux world is effectively an annoyance that’s only there as a shortcut to get distros to boot on systems that trust Microsoft’s keys but otherwise offer no actual security.
It however doesn’t have to be this way, and I welcome efforts to make Linux just as secure as proprietary OSes who actually have full code signature verification all the way down to userspace.
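The chain-of-trust argument above can be made concrete with the TPM's extend operation: each boot stage folds the hash of the next stage into a running register, so the final value commits to everything measured along the way. A rough Python sketch of that construction (real PCRs live inside the TPM, but use this same extend rule; the stage names are illustrative):

```python
import hashlib

def extend(pcr, component):
    """PCR extend: new = SHA-256(old || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = b"\x00" * 32  # reset value at power-on
for stage in [b"firmware", b"bootloader", b"kernel", b"initrd"]:
    pcr = extend(pcr, stage)

# If the initrd were swapped out, every extend after that point -- and hence
# the final digest -- changes, which is what a verifier would notice.
tampered = b"\x00" * 32
for stage in [b"firmware", b"bootloader", b"kernel", b"evil-initrd"]:
    tampered = extend(tampered, stage)

assert pcr != tampered
```

This is also why the parent's complaint matters: if measurement stops at the kernel, replacing anything after that point never changes the digest, so the "verified" state says nothing about userspace.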
here is some actual security: encrypted /boot, encrypted everything other than the boot loader (grub in this case)
sign grub with your own keys (some motherboards let you do so). don't let random things signed by microsoft boot (it defeats the whole point)
so you have grub in an efi partition, it passes secure boot, loads, and attempts to unlock a luks partition with the user-provided passphrase. if it passed secure boot it should increase confidence that you are typing your password into the legit thing
so anyway, after unlocking luks, it locates the kernel and initrd inside it, and boots
https://wiki.archlinux.org/title/GRUB#Encrypted_/boot
the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech
however, this is still better than, at failure, letting anything run
sophisticated attackers will defeat this, but they can also add a variety of attacks at hardware level
6 replies →
Yes, "just as secure as proprietary OSes" that, due to failed signature verification, are no longer able to start notepad.exe.
I think you might want to go re-read the last ~6 months of IT news in regards of "secure proprietary OSes".
7 replies →
There is the Integrity Measurement Architecture, but it isn't very mature in my opinion. Even Secure Boot and module signing are a manual setup by users; they aren't supported by default, or by installers. You have to more or less manage your own certs and CA, although I did notice some laptops have Debian signing keys in UEFI by default? If only the Debian installer set up module signing.
But you miss a critical part - Secure Boot, as the name implies is for boot, not OS runtime. Linux I suppose considers the part after initrd load, post-boot perhaps?
I think pid-1 hash verification from the kernel is not a huge ask, as part of secure boot, and leave it to the init system to implement or not implement user-space executable/script signature enforcement. I'm sure Mr. Poettering wouldn't mind.
It is not useless. I'm using a UKI, so the initrd is built into the kernel binary and signed. I'm not using a bootloader, so UEFI checks my kernel signature. My userspace is encrypted and the key is stored in the TPM, so the whole boot chain is verified.
you can merge the initrd + kernel into one signed binary pretty easily with systemd-boot
add luks root, then it's not that bad
2 replies →
Isn’t the idea that the kernel will verify anything beneath it. Secure boot verifies the kernel and then it’s in the hands of the kernel to keep verifying or not.
2 replies →
A basic setup to make use of secure boot is SB+TPM+LUKS. Unfortunately I don't know of any distro that offers this in a particularly robust way.
Code signature verification is an interesting idea, but I'm not sure how it could be achieved. Have distro maintainers sign the code?
2 replies →
Isn't it possible to force TPM measurements for stuff like the kernel command line or initramfs hash to match in order to decrypt the rootfs? Or make things simpler with UKIs?
Most of the firmwares I've used lately seem to allow adding custom secureboot keys.
1 reply →
There is some level of misinformation in your post. Both Windows and Linux check driver signatures. Once you boot Linux with UEFI Secure Boot, you cannot use unsigned drivers because the kernel can detect it and activate lockdown mode. You have to sign all of your drivers within the same PKI as your UEFI key.
1 reply →
Remote attestation is another technology that is not inherently restrictive of software freedom. But here are some examples of technologies that have already restricted freedom due to oligopoly combined with network effects:
* smartphone device integrity checks (SafetyNet / Play Integrity / Apple DeviceCheck)
* HDMI/HDCP
* streaming DRM (Widevine / FairPlay)
* Secure Boot (vendor-keyed deployments)
* printers w/ signed/chipped cartridges (consumables auth)
* proprietary file formats + network effects (office docs, messaging)
It very clearly is restrictive of software freedom. I've never suffered from an evil maid breaking into my house to access my computer, but I've _very_ frequently suffered from corporations trying to prevent me from doing what I wish with my own things. We need to push back on this notion that this sort of thing was _ever_ for the end-user's benefit, because it's not.
Remote attestation seems more useful for server hosts to let VPS users verify the server hasn’t been tampered with.
YOU can use remote attestation to verify a remote server you are paying for hasn't been tampered with.
2 replies →
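A sketch of how such a check can work, with an HMAC standing in for the TPM's quote signature (a real quote is signed by an attestation key whose certificate chains back to the hardware vendor; `GOOD_PCR` and the shared key here are illustrative). The verifier-chosen nonce is what prevents replaying an old quote:

```python
import hashlib, hmac, os

AK = os.urandom(32)  # attestation key (shared here only for the sketch)
GOOD_PCR = hashlib.sha256(b"approved-server-image").digest()

def server_quote(nonce, pcr):
    """Server proves its boot state by signing (nonce, pcr)."""
    return hmac.new(AK, nonce + pcr, hashlib.sha256).digest()

def verify(nonce, pcr, quote):
    """Client accepts only an approved PCR value, freshly signed over its nonce."""
    expected = hmac.new(AK, nonce + pcr, hashlib.sha256).digest()
    return pcr == GOOD_PCR and hmac.compare_digest(quote, expected)

nonce = os.urandom(16)  # verifier-chosen freshness challenge
assert verify(nonce, GOOD_PCR, server_quote(nonce, GOOD_PCR))

bad_pcr = hashlib.sha256(b"tampered-image").digest()
assert not verify(nonce, bad_pcr, server_quote(nonce, bad_pcr))
```

The power asymmetry discussed in this thread is just a question of which direction this handshake runs: the same mechanism lets a customer hold a host to account, or a service lock out its users.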
To play devil's advocate, I don't think most people would be fine with their car ramming into a military base after an unfriendly firmware update.
However, I agree that the risks to individuals and their freedoms stemming from these technologies outweigh the benefits in most cases.
5 replies →
It's interesting there's no remote attestation the other way around, making sure the server is not doing something to your data that you didn't approve of.
There is. Signal uses it, for example. https://signal.org/blog/building-faster-oram/
For another example, IntegriCloud: https://secure.integricloud.com/
confidential computing?
The authors clearly don’t intend this to happen but that doesn’t matter. Someone else will do it. Maybe this can be stopped with licensing as we tried to stop the SaaS loophole with GPLv3?
I am quite conflicted here. On one hand I understand the need for it (offsite colo servers is the best example). Basic level of evil maid resistance is also a nice to have on personal machines. On the other hand we have all the things you listed.
I personally don't think this product matters all that much for now. These types of tech are not oppressive by themselves, only when they are demanded by an adversary. The ability of the adversary to demand it is a function of how widespread the capability is, and there aren't going to be enough Linux clients for this to start infringing on the rights of the general public just yet.
A bigger concern is all the efforts aimed at imposing integrity checks on platforms like the Web. That will eventually force users to make a choice between being denied essential services and accepting these demands.
I also think AI would substantially curtail the effect of many of these anti-user efforts. For example a bot can be programmed to automate using a secure phone and controlled from a user-controlled device, cheat in games, etc.
> On one hand I understand the need for it (offsite colo servers is the best example).
Great example of proving something to your own organization. Mullvad is probably the most trusted VPN provider and they do this! But this is not a power that should be exposed to regular applications, or we end up with a dystopian future of you are not allowed to use your own computer.
On the other side, Mullvad is looking at remote attestation so that users can verify their servers: https://news.ycombinator.com/item?id=29903695
> * Secure Boot (vendor-keyed deployments)
I wish this myth would die at this point.
Secure Boot allows you to enroll your own keys. This is part of the spec, and there is no shipped firmware that prevents you from going through this process.
Android lets you put your own signed keys in on certain phones. For now.
The banking apps still won't trust them, though.
To add a quote from Lennart himself:
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
Your system will not belong to you anymore. Just as it is with Android.
3 replies →
> This is part of the spec, and there are no shipped firmwares that prevents you from going through this process.
Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.
3 replies →
> Secure Boot allows you to enroll your own keys
UEFI secure boot on PCs, yes for the most part. A lot of mobile platforms just never supported this. It's not a myth.
4 replies →
What about all those Windows on ARM laptops?
I wish the myth of the spec would die at this point.
Many motherboards' Secure Boot implementations violate the supposed standard and do not allow you to invalidate the pre-loaded keys you don't approve of.
Well, I can see what heinous thing is going to be ruining my day in 5 years.
Attestation, the thing we're going to be spending the next forever trying to get out of phones, now in your kernel.
It's interesting how quickly the OSS movement went from "No, no, we just want to include companies in the Free Software Movement" to "Oh, don't worry, it's ok if companies with shareholders that are not accountable to the community have a complete monopoly on OSS, and decide what direction it takes"
FOSS was imagined as a brotherhood of hackers, sharing code back and forth to build a utopian code commons that provided freedom to build anything. It stayed firmly in the realm of the imaginary because, in the real world, everybody wants somebody else to foot the bill or do the work. Corporations stepped up once they figured out how to profit off of FOSS and everyone else was content to free ride off of the output because it meant they didn't have to lift a finger. The people who actually do the work are naturally in the driver's seat.
1 reply →
systemd solved/improved a bunch of things for linux, but now the plan seems to be to replace package management with image-based whole-distro A/B swaps, and to have signed unified kernel images.
this will basically remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an ID
Poettering now works for Microsoft; they want to turn linux into an appliance just like windows, no longer a general-purpose os. the transition is still far from over on windows, but look at android and how tight the google play services dependency/choke-hold is
i'm sure i'll get many downvotes, but despite some hyperbole this is the trajectory
We warned you that systemd was just the beginning.
> the plan seems to be to replace package management with image based whole dist a/b swaps
The plan is probably to have that as an alternative for the niche uses where that is appropriate.
The majority of this thread seems to have slid down that slippery slope, jumping directly to the conclusion that the attestation mechanism will be mandatory on all Linux machines in the world and you won't be able to run anything without it. Even if that were a goal for amutable as a company, it's unfeasible when there's such a breadth of distributions and non-corporate-affiliated developers out there that would need to cooperate for that to happen.
Nobody says that you will not have alternatives. What people are saying, is that if you're using those alternatives you won't be able to watch videos online, or access your bank account.
Eventually you will not be able to block ads.
1 reply →
Immutable, signed systems do not intrinsically conflict with hackability. See this blog post of Lennart's[0] and systemd's ParticleOS meta-distro[1].
I do agree that these technologies can be abused. But system integrity is also a prerequisite for security; it's not like this is like Digital "Rights" Management, where it's unequivocally a bad thing that only advances evil interests. Like, Widevine should never have been made a thing in Firefox imo.
So I think what's most productive here is to build immutable, signable systems that can preserve user freedom, and then use social and political means to further guarantee those freedoms. For instance a requirement that owning a device means being able to provision your own keys. Bans on certain attestation schemes. Etc. (I empathize with anyone who would be cynical about those particular possibilities though.)
[0] https://0pointer.net/blog/fitting-everything-together.html
[1] https://github.com/systemd/particleos
Linux is nowadays mostly sponsored by big corporations. They have different goals and different ways of doing things. For probably its first 10 years Linux was driven by enthusiasts, and therefore it was a lean system. Something like systemd is typical corporate output; due to its complexity it would have died long before finding adoption, but with enterprise money this is possible. Try to develop for the combo Linux Bluetooth/Audio/D-Bus: the complexity drives you crazy, because all this stuff was made for (and financed by) the corporate needs of the automotive industry. Simplicity is never a goal in these big companies.
But then Linux wouldn't be where it is without the business side paying for the developers. There is no such thing as a free lunch...
> this basically will remove or significantly encumber user control over their system, such that any modification will make you lose your "signed" status and ... boom! goodbye accessing the internet without an id
Yeah. I'm pretty sure it requires a very specific psychological profile to decide to work on such a user-hostile project while post-fact rationalizing that it's "for good".
All I can say is I'm not surprised that Poettering is involved in such a user-hostile attack on free computing.
P.S: I don't care about the downvotes, you shouldn't either.
Does this guy do anything that is user-friendly and is as per open source ethos of freedom and user control? In all this shit-show of Microsoft shoving AI down the throat of its users, I was happy to be firmly in the Linux camp for many many years. And along come these kind of people to shit on that parade too.
P.S: Upvoted you. I don't care about downvotes either.
Exciting!
It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.
I have followed systemd's efforts into Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a very similar direction to these projects:
- Hal Finney's transparent server
- Keylime
- System Transparency
- Project Oak
- Apple Private Cloud Compute
- Moxie's Confer.to
I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.
I'd love to meet up at FOSDEM. Email me at fredrik@mullvad.net.
Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)
Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all project above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE
Hi, I'm David, founding product lead.
Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us. We want to understand how we put the right roots of trust and observability into your hands.
Edit: I've reached out privately by email for next steps, as you requested.
Hi David. Great! I actually wasn't planning on going due to other things, but this is worth re-arranging my schedule a bit. See you later this week. Please email me your contact details.
As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.
Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd, and some other things. That way we can focus on other things. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.
I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)
*: https://mullvad.net/en/blog/system-transparency-future
**: https://witness-network.org
I'm super far from an expert on this, but it NEEDS reproducible builds, right? You need to start from a known good, trusted state - otherwise you cannot trust any new system states. You also need it for updates.
Well, it comes down to what trust assumptions you're OK with. Reproducible builds reduce trust in the build environment, but you still need to ensure the authenticity of the source somehow. Verified boot, measured boot, repro builds, local/remote attestation, and transparency logging provide different things. Combined, they make possible a sort of authentication mechanism between a server and a client. However, each of these concepts is useful by itself.
Ah, good old remote attestation. Always works out brilliantly.
I have this fond memory of that Notary in Germany who did a remote attestation of me being with him in the same room, voting on a shareholder resolution.
While I was currently traveling on the other side of the planet.
This great concept that totally will not blow up the planet has been proudly brought to you by Ze Germans.
No matter what your intentions are: It WILL be abused and it WILL blow up. Stop this and do something useful.
[While systemd had been a nightmare for years, these days it's actually pretty good, especially if you disable the "oh, and it can ALSO create perfect eggs benedict and make you a virgin again while booting up the system!" part of it. So, no bad feelings here. Also, I am German. Also: insert list of history books here.]
no no, let him get distracted by it; the one good thing that happened after he got bored with pulseaudio is that pulseaudio started getting better.
What is the endgame here? Obviously "heightened security" in some kind of sense, but to what end and what mechanisms? What is the scope of the work? Is this work meant to secure forges and upstream development processes via more rigid identity verification, or package manager and userspace-level runtime restrictions like code signing? Will there be a push to integrate this work into distributions, organizations, or the kernel itself? Is hardware within the scope of this work, and to what degree?
The website itself is rather vague in its stated goals and mechanisms.
I suspect the endgame is confidential computing for distributed systems. If you are running high value workloads like LLMs in untrusted environments you need to verify integrity. Right now guaranteeing that the compute context hasn't been tampered with is still very hard to orchestrate.
That endgame has so far been quite unreachable. TEE.fail is the latest in a long sequence of "whoever touches the hardware can still attack you".
https://arstechnica.com/security/2025/09/intel-and-amd-trust...
No, the endgame is that a small handful of entities or a consortium will effectively "own" Linux because they'll be the only "trusted" systems. Welcome to locked-down "Linux".
You'll be free to run your own Linux, but don't expect it to work outside of niche uses.
Personally this is interesting to me because there needs to be a way for a hardware token providing an identity to interact with a device-and-software combination that ensures there is no tampering between the user who owns the identity and the end result of the computation.
A concrete example of that is electronic ballots, which is a topic I often bump heads with the rest of HN about, where a hardware identity token (an electronic ID provided by the state) can be used to participate in official ballots, while both the citizen and the state can have some assurance that there was nothing interceding between them in a malicious way.
Does that make sense?
No.
Entities other than me being able to control what runs on the device I physically possess is absolutely not acceptable in any way. Screw your clients, screw your shareholders, and screw you.
Assuming you're using systemd, you already gave up control over your system. The road to hell was already paved. Now, you would have to go out of your way to retain control.
In the grand scheme of things, this period where systemd was intentionally designed, developed, and funded to hurt your autonomy but seemed temporarily innocuous will be a rounding error.
Nah man, you are FUDding. systemd might have some poor design choices and arrogant maintainers, but at least I can drop it at any time and my bank won't freak out about it. This one… it's a whole other level.
Do you plan to sell this technology to laptop makers so their laptops will only run the OS they came with?
Or, worse, laptops that will run any unsupported Linux as long as it contains systemd, so no *BSD, etc., and also no manufacturer support?
Laptops already ship secure boot.
Not all. The ones that ship Linux preinstalled and with support don't.
I can turn that crap off. For now.
If they wanted to do that, they already would have. Do you think laptop makers need this technology to limit user freedom this way?
I think https://0pointer.net/blog/authenticated-boot-and-disk-encryp... is a much better explanation of the motivation behind this straight from the horse's mouth. It does a really good job of motivating the need for this in a way that explains why you as the end user would desire such features.
The motivation is nice. The idea has merit.
It's the people behind this project who scare me.
To me this looks bad on so many levels. I hate it immediately.
One bit of good news is that maybe LP will get less involved in systemd.
If you're going to flame it you might as well point out something concrete you don't like about it.
"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."
See Android; or, where you no longer own your device, and if the company decides, you no longer own your data or access to it.
I really hope this will be geared towards clients being able to verify server state, or general server-related use cases, instead of trying to replicate a SafetyNet-style corporate dystopia on the desktop.
>Amutable is based out of Berlin, Germany.
Probably obvious from the surnames, but this is the first time I've seen an EU company pop up on Hacker News that could be mistaken for a Californian company. Nice to see that ambition.
I understand systemd is controversial; that can be debated endlessly. But the executive team and engineering team look very capable. It will be interesting to see where this goes.
Hello Chris,
I am glad to see these efforts are now under an independent firm rather than being directed by Microsoft.
What is the ownership structure like? Where/who have you received funding from, and what is the plan for ongoing monetization of your work?
Would you ever sell the company to Microsoft, Google, or Amazon?
Thanks.
> Would you ever sell the company to Microsoft, Google, or Amazon?
No matter what the founders say, the answer to this question is always yes.
> Where/who have you received funding from
I don't think you will ever get a response to that
It's pretty normal to say who leads your investing rounds is it not?
I'm not asking for a client list, to be clear.
Lennart will be involved with at least three events at FOSDEM on the coming weekend. The talks seem unrelated at first glance but maybe there will be an opportunity to learn more about his new endeavor.
https://fosdem.org/2026/schedule/speaker/lennart_poettering/
Also see http://amutable.com/events which lists a talk at Open Confidential Computing Conference (Berlin, March)
I don't even know why these kind of user-hostile people are given a platform. This kind of shit is against freedom and user control.
"We are building cryptographically verifiable integrity into Linux systems. Every system starts in a verified state and stays trusted over time."
What does this mean? Why would anyone want this? Can you explain this to me like I'm five years old?
Your computer will come with a signed operating system. If you modify the operating system, your computer will not boot. If you try to install a different operating system, your computer will not boot.
> If you try to install a different operating system, your computer will not boot.
That does not follow. That would only very specifically happen when all of these are true:
1. Secure Boot cannot be disabled
2. You cannot provision your own Secure Boot keys
3. Your desired operating system is not signed by the computer's trusted Secure Boot keys
"Starting in a verified state and stay[ing] trusted over time" sounds more like using measured boot. Which is basically its own thing and most certainly does not preclude booting whatever OS you choose.
Although if your comment was meant in a cynical way rather than approaching things technically, then I don't think my reply helps much.
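To make the measured-boot point concrete, here is a minimal sketch of the TPM's PCR-extend operation. It is heavily simplified (real TPMs have many PCRs and multiple hash banks, and the stage names here are invented for the example), but it shows why the final PCR value commits to the entire boot chain without preventing any particular OS from booting:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: chain the old PCR value with the
    hash of the newly measured component."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start at all zeros; each boot stage measures the next one.
pcr = bytes(32)
for stage in [b"firmware", b"bootloader", b"kernel+cmdline"]:
    pcr = extend(pcr, stage)

# The final value deterministically encodes the whole chain; changing
# any stage (or the order of stages) yields a different PCR value.
print(pcr.hex())
```

Nothing here refuses to boot anything; the PCR just ends up with a different value if the chain differs, and it is then up to policy (TPM-sealed secrets, a remote verifier) what to do with that.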
https://youtu.be/EzSkU3Oecuw?si=1fNV6XkyTv7SfpJs
Good thing; without the power coming from Red Hat money, the capacity for ruining the Linux ecosystem will finally be reduced!
Remote attestation requires a great deal of trust... I know this comment is likely to be down-voted, but I can't think of a Lennart Poettering project that didn't try to extend, centralize, and conglomerate Linux, with disastrous results in the short term and less innovation, flexibility, and functionality in the long term. Trading the strengths of Unix systems for the goal of making them more "Microsoft"-like.
Remote attestation requires a great deal of trust, and I simply don't have it when it comes to this leadership team.
How do you plan to handle the confused deputy problem?[1]
[1] https://en.wikipedia.org/wiki/Confused_deputy_problem
Everything under the assumption that tampering is a bigger problem than abusive companies controlling your software stack.
This feels like something that's being created for a Microsoft edition of Linux.
Microsoft has fully embraced Linux now; it's time to move to the next step.
Hi Chris,
One of the most grating pain points of the early versions of systemd was a general lack of humility, some would say rank arrogance, displayed by the project lead and his orbiters. Today systemd is in a state of "not great, not terrible", but it was (and in some circles still is) notorious for breaking people's Linux installs and their workflows, and generally just causing a lot of headaches. The systemd project leads responded mostly with Apple-style "you're holding it wrong" sneers.
It's not immediately clear to me what exactly Amutable will be implementing, but it smells a lot like some sort of DRM, and my immediate reaction is that this is something that Big Tech wants but that users don't.
My question is this: Has Lennart's attitude changed, or can linux users expect more of the same paternalism as some new technology is pushed on us whether we like it or not?
Thank you for this question, it perfectly captures something that I believe many would like answered.
As someone who's lost many hours troubleshooting systemd failures, I would like an answer to this question, too.
You won't believe how many hours we lost troubleshooting SysV init and Upstart issues. systemd is so much better in every way: reliable parallel init with dependencies, proper handling of double forking, much easier service hardening (systemd-analyze security), proper timer handling (yay, no more cron), proper temporary file/directory handling, centralized logs, etc.
It improves on about every level compared to what came before. And no, nothing is perfect and you sometimes have to troubleshoot it.
It doesn't smell like DRM, it is literally DRM.
Thank you for formulating the question we all have in such a polite way. This is a masterpiece.
Of course it will not be answered. And that's exactly an answer to your question.
Awful. I hope they fall.
anything that keeps him away from systemd is a good thing.
systemd kept him away from pulseaudio and whoever is/was maintaining that after him was doing a good job of fixing it.
The ultimate fix was to throw it out and replace it. Pipewire is a so much better system.
Why on earth would somebody make a company with one of the most reviled programmers on earth? Everyone knows that everything he touches turns to shit.
I'll ask the dumb question sorry!
Who is this for / what problem does it solve?
I guess security? Or maybe reproducability?
My guess the problem being solved is how to get acquired by a big Linux vendor.
I thought it was how to plug the user freedom hole. Profits are leaking because users can leave the slop ecosystem and install something that respects their freedom. It's been solved on mobile devices and it needs to be solved for desktops.
No. Esp with LP’s track record in systemd.
See: “it’s just an init system”, where it’s now also a resolver, log system, etc.
I can buy good intentions, but this opens up too much possibility for not-so-good-intended consequences. Deliberate or emergent.
it's not just a resolver, log system, etc
it's a buggy-as-hell resolver, buggy-as-hell log system, buggy-as-hell ntp client, buggy-as-hell network manager, ...
All vague hand waving at this point and not much to talk about. We'll have to wait and see what they deliver, how it works and the business model to judge how useful it will be.
What might you call a sort of Dunbar's number that counts not social links, but the number of things to which a person must actively refuse consent?
What will they be reinventing from scratch for no reason?
Can someone smarter than myself describe immutability versus atomicity in regards to current operating systems on the market?
Immutability means you can't touch or change some parts of the system without great effort (e.g. macOS SIP).
Atomicity means you can track every change, and every change is small enough that it affects only one thing and can be traced, replayed, or rolled back. It's like going from A to B and being able to return to A (or go to B again) in a deterministic manner.
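As a rough illustration of the atomic A/B switching that immutable distros rely on (the mechanism real systems like ostree use is more elaborate; all paths and names here are invented for the sketch): you stage a complete new system tree, then flip one symlink with an atomic rename, so an observer always sees either fully-A or fully-B, never a half-updated system.

```python
import os
import tempfile

def atomic_switch(current_link: str, new_target: str) -> None:
    """Repoint `current_link` at `new_target` atomically: create the
    new symlink under a temporary name, then rename it over the old
    one. rename() is atomic on POSIX, so readers see either the old
    target or the new one, never a broken or missing link."""
    tmp = current_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_target, tmp)
    os.replace(tmp, current_link)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "system-A"))  # currently deployed tree
os.makedirs(os.path.join(root, "system-B"))  # fully staged new tree
link = os.path.join(root, "current")

os.symlink(os.path.join(root, "system-A"), link)      # initial deployment
atomic_switch(link, os.path.join(root, "system-B"))   # upgrade A -> B
atomic_switch(link, os.path.join(root, "system-A"))   # rollback B -> A
print(os.readlink(link))  # ends with "system-A" after the rollback
```

Rollback is just pointing the link back, which is why A/B schemes can promise "either the update fully applied or nothing changed".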
Hopefully he will leave systemd alone and stop closing bugs he doesn't understand now
The first steps look similar to secure boot with TPM.
It starts from there, then systemd takes over and carries the flag forward.
See the "features" list from systemd 257/258 [0].
[0]: https://0pointer.net/blog/
So LP is leaving, or has left, Microsoft?
>We are building cryptographically verifiable integrity into Linux systems
I wonder what that means? It could be a good thing, but I tend to think it could be a privacy nightmare, depending on who controls the keys.
Verifiable to who? Some remote third party that isn't me? The hell would I want that?
https://0pointer.net/blog/authenticated-boot-and-disk-encryp...
You. The money quote about the current state of Linux security:
> In fact, right now, your data is probably more secure if stored on current ChromeOS, Android, Windows or MacOS devices, than it is on typical Linux distributions.
Say what you want about systemd the project but they're the only ones moving foundational Linux security forward, no one else even has the ambition to try. The hardening tools they've brought to Linux are so far ahead of everything else it's not even funny.
Just an assumption here, but the project appears to be about the methodology to verify the install. Who holds the keys is an entirely different matter.
The events include a talk titled "Remote Attestation of Immutable Operating Systems built on systemd", which is a bit of a clue.
I'm sure this company is more focused on the enterprise angle, but I wonder if the buildout of support for remote attestation could eventually resolve the Linux gaming vs. anti-cheat stalemate. At least for those willing to use a "blessed" kernel provided by Valve or whoever.
Yes, I have.
A rust-vmm-based environment that verifies/authenticates an image before running it? An immutable VM (no FS, root dropped after setting up the network, no or only curated devices), a 'micro'-VM based on systemd? A VMM that captures the running kernel's code/memory mappings before handing off to userland and periodically checks they haven't changed? Anything else on the state of the art of immutable/integrity-checked VMs?
Sounds like kernel mode DRM or some similarly unwanted bullshit.
It's probably built on systemd's Secure Boot + immutability support.
As said above, it's about who controls the keys. It's either building your own castle or having to live with the Ultimate TiVo.
We'll see.
> Sounds like kernel mode DRM or some similarly unwanted bullshit.
Look, I hate systemd just as much as the next guy - but how are you getting "DRM" out of this?
I see the use case for servers targeted by malicious actors. A penetration test on a hardened system with secure boot and binary verification would be much harder.
For individuals, IMO the risk mostly comes from software they want to run (install scripts or supply-chain attacks). So if the end user is in control of what gets signed, I don't see much benefit. Unless you force users to use an app store...
Coming from software supply chain, I am excited to see such a cracked team handle this problem and I wish we talked more about this in FOSS land.
Why have the responses to the post from the CEO been moved to their own top-level posts? Also, why are replies disabled for the CEO post?
Because the feedback is overwhelmingly negative and thus deemed useless for them.
The immediate concern on seeing this is: will the maintainers of systemd use their position to push this on everyone, like every other extended feature of systemd?
Whatever it is, I hope it doesn't go the usual path of a minimal support, optional support and then being virtually mandatory by means of tight coupling with other subsystems.
Daan here, founding engineer and systemd maintainer.
So we try to make every new feature that might be disruptive optional in systemd and opt-in. Of course we don't always succeed and there will always be differences in opinion.
Also, we're a team of people that started in open source and have done open source for most of our careers. We definitely don't intend to change that at all. Keeping systemd a healthy project will certainly always stay important for me.
Hi Daan,
Thanks for the answer. Let me ask you something close with a more blunt angle:
Considering most of the tech is already present and shipping in current systemd, what prevents our systems from becoming an immutable monolith like macOS or current Android with the flick of a switch?
Or a graver scenario: what prevents Microsoft from mandating the removal of enrollment permissions for user keychains and of the Secure Boot toggle, so that every Linux distribution has to go through Microsoft's blessing to be bootable?
Thanks Daan for your contributions to systemd.
If you were not a systemd maintainer and had started this project/company independently, targeting systemd, you would have had to go through the same process as everyone else, and I would have expected the systemd maintainers to look at it objectively and review it with healthy skepticism before accepting it. But we cannot rely on those basic checks and balances anymore, and that's the most worrying part.
> that might be disruptive optional in systemd
> we don't always succeed and there will always be differences in opinion.
You (including the other maintainers) are still the final arbiters of what's disruptive. The differences of opinion in the past have mostly been settled as "deal with it", and that's the basis of the current skepticism.
>We are building cryptographically verifiable integrity into Linux systems. Every system starts in a verified state and stays trusted over time.
What problem does this solve for Linux or people who use Linux? Why is this different from me simply enabling encryption on the drive?
> we try to make every new feature that might be disruptive optional in systemd and opt-in
I find it hard to believe. Like, at all. Especially given that the general posture of your project leader is the exact opposite of that.
> systemd a healthy project
I can see that we share the same view that there are indeed differences in opinion.
> will the maintainer of systemd use their position to push this on everyone
Can you imagine the creator of systemd not doing so?
systemd is the most well-supported init system there is.
Frankly this disgusts me. While there are technically user-empowering ways this can be used, by far the most prevalent use will be to lock users/customers out of true ownership of their own devices.
Device attestation fails? No streaming video or audio for you (you obvious pirate!).
Device attestation fails? No online gaming for you (you obvious cheater!).
Device attestation fails? No banking for you (you obvious fraudster!).
Device attestation fails? No internet access for you (you obvious dissident!).
Sure, there are some good uses of this, and those good uses will happen, but this sort of tech will be overwhelmingly used for bad.
1. Are reproducible builds and transparency logging part of your concept?
2. Are you looking for pilot customers?
Damn, you are thirsty!
Are these some problems you've personally been dealing with?
I just want more trustworthy systems. This particular concept of combining reproducible builds, remote attestation and transparency logs is something I came up with in 2018. My colleagues and I started working on it, took a detour into hardware (tillitis.se) and kind of got stuck on the transparency part (sigsum.org, transparency.dev, witness-network.org).
Then we discovered snapshot.debian.org wasn't feeling well, so that was another (important) detour.
Part of me wishes we had focused more on getting System Transparency in its entirety into production at Mullvad. On the other hand, I certainly don't regret us creating the Tillitis TKey, Sigsum, taking care of the Debian snapshot service, and several other things.
Now, six years later, systemd and other projects have gotten a long way to building several of the things we need for ST. It doesn't make sense to do double work, so I want to seize the moment and make sure we coordinate.
These kinds of problems are very common in certain industries.
Trusted computing and remote attestation are like two people who want to have sex requiring clean STD tests first. Either party can refuse, and thus no sex will happen. A bank trusting a random rooted smartphone is like having sex with a prostitute with no condom. The anti-attestation position is essentially "I have a right to connect to your service with an unverified system, and refusing me is oppression." Translate that to the STD context and it sounds absurd: "I have a right to have sex with you without testing, and requiring tests violates my bodily autonomy."
You're free to root your phone. You're free to run whatever you want. You're just not entitled to have third parties trust that device with their systems and money. Same as you're free to decline STD testing - you just don't get to then demand unprotected sex from partners who require it.
But I'm not having sex with my bank.
You do know what analogies are, right?
You are trying to portray it as an exchange between equal parties, which it isn't. I am totally entitled not to have to use a third-party-controlled device to access government services. Or my bank account.
Remote attestation is just fancy digital signatures with hardware-protected secret keys. Are you freaking out about digital signatures used anywhere else?
> You're just not entitled to have third parties trust that device with their systems and money.
But its a bank, right? Its my money.
If malware on your phone steals it the bank could be on the hook. The bank can set terms on how you access their computers.
I've always wondered how this works in practice for "real time" use cases, because we've seen with Secure Boot + TPM that we can attest that the boot was genuine at some point in the past. What about modifications that happen after that?
A full trusted boot chain allows you to use a reboot to revert back to a trusted state after suspected runtime compromise.
Can you share more details at this point about what you are trying to tackle as a first step?
As per the announcement, we’ll be building this over the next months and sharing more information as this rolls out. Much of the fundamentals can be extracted from Lennart’s posts and the talks from All Systems Go! over the last years.
I'm sorry, you're "happy to answer questions" and this is your reply to such a softball? What kind of questions will you answer? Favorite color?
Probably also some of the things that were described here? https://0pointer.net/blog/fitting-everything-together.html
Remote attestation only works because your CPU's secure enclave has a private key burned (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.
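A heavily simplified sketch of the resulting protocol. Real attestation uses an asymmetric device key plus a manufacturer certificate chain; here an HMAC shared secret stands in for the fused key just to show the flow of nonce, quote, and verification:

```python
import hashlib
import hmac
import os

# Stand-in for the key fused into the enclave at the factory. A real
# scheme uses an asymmetric key whose public half is certified by the
# manufacturer; a shared HMAC secret is used here only for illustration.
DEVICE_KEY = os.urandom(32)

def quote(pcr_value: bytes, nonce: bytes) -> bytes:
    """Device side: sign the current measurement together with the
    verifier's nonce, so the quote is fresh and cannot be replayed."""
    return hmac.new(DEVICE_KEY, pcr_value + nonce, hashlib.sha256).digest()

def verify(pcr_value: bytes, nonce: bytes, signature: bytes,
           expected_pcr: bytes) -> bool:
    """Verifier side: check the signature AND that the reported
    measurement matches the known-good reference value."""
    expected_sig = hmac.new(DEVICE_KEY, pcr_value + nonce,
                            hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, signature) \
        and pcr_value == expected_pcr

good_pcr = hashlib.sha256(b"trusted-boot-chain").digest()
nonce = os.urandom(16)          # challenge chosen by the verifier
sig = quote(good_pcr, nonce)    # device answers the challenge
print(verify(good_pcr, nonce, sig, expected_pcr=good_pcr))  # True
```

The political questions in this thread all reduce to who plays the verifier role and which reference values they accept, not to the cryptography itself.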
Great; how can I short it?
The photos depict these people as funny hobbits :D. Photographer trolled them big time. Now, the only question left is whether their feet are hairy.
---
Making secure boot 100 times simpler would be a deffo plus.
I'm not seeing any big problems with the portraits.
Having said that, should this company not be successful, Mr Zbyszek Jędrzejewski-Szmek has potentially a glowing career as an artists' model. Think Rembrandt sketches.
I look forward to something like ChromeOS that you can just install on any old refurbished laptop. But I think the money is in servers.
Are you guys hiring? I can emulate a grim smile and have no problem being diabolical if the pay is decent so maybe I am a good fit? I can also pet goats
this is very interesting... I've been watching the work around bootc coupled with composefs + dm-verity + signed UKIs, and I'm wondering if this will build upon that.
So I imagine Lennart Poettering has left Microsoft.
Rodrigo from the Amutable team here. Yes, Lennart has left Microsoft.
Ah, thanks for jumping in.
Are there VCs who participated in funding this, or are you self-funded?
I chuckle because their official address is just 20 minutes from my home / current location.
I wish you great success
- How different is this from Fedora Silverblue or Bluefin?
- It looks like they want to build a ChromeOS without Google.
Fantastic news, congrats on launching! It's a great mission statement and a fantastic ensemble for the job.
Will this do remote attestation? What hardware platforms will it support? (Intel SGX, AMD SEV, AWS Nitro?)
Some people just can't stop making others' lives more miserable, can they?
Is this headed towards becoming a new Linux distribution or hardening existing ones?
So much negativity in this thread. I actually think this could be useful, because tamper-proof computer systems are useful to prevent evil maid attacks. Especially in the age of Pegasus and other spyware, we should also take physical attack vectors into account.
I can relate to people being rather hostile to the idea of boot verification, because this is a process that is really low-level and also something that we as computer experts rarely interact with deeply. The most challenging part of installing a Linux system is always installing the boot loader and potentially setting up a UEFI partition. These are things that I don't do every day and that I don't have deep knowledge of, and if things go wrong, it is extra hard to fix them. Secure Boot makes it even harder to understand what is going on. There is a general lack of knowledge of what is happening behind the scenes, and it is really hard to learn about it. I feel that the people behind this project should really keep XKCD 2501 in mind when talking to their fellow computer experts.
> I actually think this could be useful
Yeah, it could be. Could. But it also could be used for limiting freedoms with general-purpose computing. Guess which it's going to be?
> hostile to the idea of boot verification, because this is a process that is really low level
Not because of that.
Because it's only me who gets to decide what runs on my computer, not someone else. I don't need LP's permission to run binaries.
I personally do not worry about an evil maid attack _at all_. But I do worry about someone restricting what I can do with _my_ computer.
I mean, in theory, the idea is great. But it WILL be misused by greedy fucks.
Will you always offer an option to end users to disable the system if they so desire?
It won't matter if you disable it. You simply won't be able to use your PC with any commercial services, in the same way that a rooted Android installation can't run banking apps without doing things to break that restriction, and what they're working on here aims to make that "breakage" impossible.
They will. Just like they pretend it's the distros who made systemd ubiquitous.
So it's going to be someone disabling this for end users.
Looking forward to never using any of this, quite frankly; and hoping it remains optional for the kernel.
If there’s a path to profitability, great for them, and for me too; because it means it won’t be available at no charge.
No one wants this for their computer.
These kinds of technologies are forced on users.
How do they plan to make Linux (with MLoCs...) deterministic?
Why not adopt seL4 like everybody else who is not outright delusional[0][1]?
0. https://sel4.systems/Foundation/Membership/
1. https://sel4.systems/use.html
How long until you have SIL-4 under control and can demonstrate it?
Great team, wishing you all the best!
Just get a Mac, I guess.
Terrible idea, I hope they go bankrupt.
I can see like 100 ways this can make computing worse for 99% of people and like 1-2 scenarios where it might actually be useful.
Like, if the politicians pushing for chat control / on-device scanning of data come knocking again and actually get it through (they can try infinitely many times), tech like this will really be "useful". Oops, your device cannot produce a valid attestation, no internet for you.
Hmph, AFAIK systemd has been struggling with TPM stuff for a while (much longer than I anticipated). It’s kinda understandable that the founder of systemd is joining this attestation business, because attestation ultimately requires far more than a stable OS platform plus an attestation module.
A reliably attestable system has to nail the entire boot chain: BIOS/firmware, bootloader, kernel/initramfs pairs, the `init` process, and the system configuration. Flip a single bit anywhere along the process, and your equipment is now a brick.
Getting all of this right requires deep system knowledge, plus a lot of hair-pulling adjustment, assuming you still have hair left.
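For intuition, the measurement chain described above can be sketched as a TPM-style PCR extend: a hash chain where each boot stage folds its digest into the previous register value. This is a simplified illustration (a real TPM spreads measurements across multiple PCRs and an event log; stage names here are made up), but it shows why a single flipped bit anywhere in the chain diverges the final value:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: new PCR = H(old PCR || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# Measure each boot stage into a single register (PCRs start zeroed at reset).
stages = [b"firmware", b"bootloader", b"kernel+initramfs", b"config"]
pcr = bytes(32)
for stage in stages:
    pcr = pcr_extend(pcr, stage)

# Change a single byte in one stage and re-measure: the final value diverges,
# so a sealed secret or a remote verifier would reject the whole chain.
tampered = bytes(32)
for stage in [b"firmware", b"bootloader", b"kernel+initramfz", b"config"]:
    tampered = pcr_extend(tampered, stage)

assert pcr != tampered
```

Because extend is one-way, there is no way to "un-measure" a stage; the only path to the expected final value is booting exactly the measured chain, which is why any deviation bricks the attestation rather than degrading it.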
I think this part of Linux has been underrated. TPM is a powerful platform that is universally available, and Linux is the perfect OS to fully utilize it. The need for trust in the digital realm will only increase. Who knows, it may even integrate with cryptocurrency or even social platforms. I really wish them good luck.
It might be a good time to rewrite systemd in rust...
Amazing, I wish them great success! <3
amutable -k
I knew they had an authoritarian streak. This is not surprising, and frankly horrifyingly dystopian.
"Those who give up freedom for security deserve neither."
The typical HN rage-posting about DRM aside, there's no reason that remote attestation can't be used in the opposite direction: to assert that a server is running only the exact code stack it claims to be, avoiding backdoors. This can even be used with fully open-source software, creating an opportunity for OSS cloud-hosted services which can guarantee that the OSS and the build running on the server match. This is a really cool opportunity for privacy advocates if leveraged correctly - the idea could be used to build something like Apple's Private Cloud Compute but even more open.
Like evil maid attacks, this is a vanishingly rare scenario brought out to try to justify technology that will overwhelmingly be used to restrict computing freedom.
In addition, the benefit is a bit ridiculous, like that of DRM itself. Even if it worked, your "trusted software" is literally going to be running in an office full of the most advanced crackers money can buy, with every incentive to exploit your scheme and not publish the fact that they did. The attack surface of the entire thing is so large it boggles the mind that there are people who believe in the "secure computing cloud" scenario.
WHAT is the usage and benefit for private users? This is always neglected.
Avoiding backdoors is something you, as a private person, can only solve by having the hardware at your place, because hardware can ALWAYS have backdoors, and hardware vendors do not fix their shit.
From my point of view it ONLY gives control and possibilities to large organizations like governments and companies, which in turn use it to control citizens.
You're absolutely right, but considering Windows requirements drive the PC spec, this capability can be used to force Linux distributions in bad ways.
So, some of the people doing "typical HN rage-posting about DRM" are also absolutely right.
The capabilities locking down macOS and iOS and related hardware also can be used for good, but they are not used for that.
> but considering Windows requirements drive the PC spec, this capability can be used to force Linux distributions in bad ways
What do you mean by this?
Is the concern that systemd is suddenly going to require that users enable some kind of attestation functionality? That making attestation possible or easier is going to cause third parties to start requiring it for client machines running Linux? This doesn't even really seem to be a goal; there's not really money to be made there.
As far as I can tell the sales pitch here is literally "we make it so you can assure the machines running in your datacenter are doing what they say they are," which seems pretty nice to me, and the perversions of this to erode user rights are either just as likely as they ever were or incredibly strange edge cases.
> there's no reason that remote attestation can't be used in the opposite direction
There is: corporate will fund this project and enforce its usage for their users not for the sake of the users and not for the sake of doing any good.
What it will be used for is to bring you a walled garden into Linux and then slowly incentivize all software vendors to only support that variety of Linux.
LP has a vast, vast experience in locking down users' freedom and locking down Linux.
> There is: corporate will fund this project and enforce its usage for their users not for the sake of the users and not for the sake of doing any good.
I'd really love to see this scenario actually explained. The only place I could really see client-side desktop Linux remote attestation gaining any foothold is to satisfy anti-cheat for gaming, which might actually be a win in many ways.
> What it will be used for is to bring you a walled garden into Linux and then slowly incentivize all software vendors to only support that variety of Linux.
What walled garden? Where is the wall? Who owns the garden? What is the actual concrete scenario here?
> LP has a vast, vast experience in locking down users' freedom and locking down Linux.
What? You can still use all of the Linuxes you used to use? systemd is open source, open-application, and generally useful?
Like, I guess I could twist my brain into a vision where each Ubuntu release becomes an immutable rootfs.img and everyone installs overlays over the top of that, and maybe there's a way to attest that you left the integrity protection on, but I don't really see where this goes past that. There's no incentive to keep you from turning the integrity protection off (and no means to do so on PC hardware), and the issues in Android-land with "typical" vendors wanting attestation to interact with you are going to have to come to MacOS and Windows years before they'll look at Linux.
intel have had a couple of goes at this
and each time the doors have been blasted wide off by huge security vulnerabilities
the attack surface is simply too large when people can execute their own code nearby
It doesn't stop remote code injection. Protecting the boot path is frankly hardly relevant on a server compared to actual threats.
You will get 10000 zero days before you get a single direct attack at hardware
The idea is that by protecting boot path you build a platform from which you can attest the content of the application. The goal here is usually that a cloud provider can say “this cryptographic material confirms that we are running the application you sent us and nothing else” or “the cloud application you logged in to matched the one that was audited 1:1 on disk.”
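As a rough sketch of that flow: the device signs its measured state together with a verifier-supplied nonce (the "quote"), and the verifier checks the signature and compares the measurements against known-good values. Everything here is illustrative; in particular, the HMAC key stands in for a real asymmetric attestation key whose certificate chains back to the hardware vendor:

```python
import hashlib
import hmac
import json

# Toy attestation key: a real TPM uses an asymmetric attestation key (AK)
# certified by the manufacturer; a shared HMAC key stands in here.
AK = b"device-unique-attestation-key"

def quote(pcr_values: dict, nonce: bytes) -> dict:
    """Device side: sign the PCR values plus the verifier's fresh nonce."""
    payload = json.dumps(pcr_values, sort_keys=True).encode() + nonce
    return {"pcrs": pcr_values,
            "sig": hmac.new(AK, payload, hashlib.sha256).hexdigest()}

def verify(q: dict, nonce: bytes, expected_pcrs: dict) -> bool:
    """Verifier side: check signature freshness, then the golden values."""
    payload = json.dumps(q["pcrs"], sort_keys=True).encode() + nonce
    good_sig = hmac.compare_digest(
        q["sig"], hmac.new(AK, payload, hashlib.sha256).hexdigest())
    return good_sig and q["pcrs"] == expected_pcrs

golden = {4: "aa" * 32, 7: "bb" * 32}   # measurements of the audited image
nonce = b"fresh-random-challenge"
assert verify(quote(golden, nonce), nonce, golden)
assert not verify(quote({4: "cc" * 32, 7: "bb" * 32}, nonce), nonce, golden)
```

The nonce is what prevents replaying an old quote from a machine that has since been modified; the golden values are what ties the signature to the audited build.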
Really excited to see a company investing in immutable and cryptographically verifiable systems. Two questions, really:
1. How will the company make money? (You have probably been asked that a million times :).)
2. Similar to the sibling: what are the first bits that you are going to work on?
At any rate, super cool and very nice that you are based in EU/Germany/Berlin!
1. We are confident we have a very robust path to revenue.
2. Given the team, it should be quite obvious there will be a Linux-based OS involved.
Our aims are global but we certainly look forward to playing an important role in the European tech landscape.
"We are confident we have a very robust path to revenue."
I take it that you are not at this stage able to provide details of the nature of the path to revenue. On what kind of timescale do you envisage being able to disclose your revenue stream/subscribers/investors?
How do you take the generally negative feedback from the community here?
I have no more information about your product than what you have shared, but I'm already scared and extremely pessimistic given the team and the ambition.
Appreciate the clarification, but this actually raises more questions than it answers.
A "robust path to revenue" plus a Linux-based OS and a strong emphasis on EU / German positioning immediately triggers some concern. We've seen this pattern before: wrap a commercially motivated control layer in the language of sovereignty, security, or European tech independence, and hope that policymakers, enterprises, and users don't look too closely at the tradeoffs.
Europe absolutely needs stronger participation in foundational tech, but that shouldn't mean recreating the same centralized trust and control models that already failed elsewhere, just with an EU flag on top. 'European sovereignty' is not inherently better if it still results in third-party gatekeepers deciding what hardware, kernels, or systems are "trusted."
Given Europe's history with regulation-heavy, vendor-driven solutions, it's fair to ask:
Who ultimately controls the trust roots?
Who decides policy when commercial or political pressure appears?
What happens when user interests diverge from business or state interests?
Linux succeeded precisely because it avoided these dynamics. Attestation mechanisms that are tightly coupled to revenue models and geopolitical branding risk undermining that success, regardless of whether the company is based in Silicon Valley or Berlin.
Hopefully this is genuinely about user-verifiable security and not another marketing-driven attempt to position control as sovereignty. Healthy skepticism seems warranted until the governance and trust model are made very explicit.
We detached this subthread from https://news.ycombinator.com/item?id=46784719.
[dead]
[flagged]
[flagged]
You're right, they shouldn't have started a company, that would be better for diversity.
[flagged]
No personal attacks on HN, please.
https://news.ycombinator.com/newsguidelines.html
Please delete my account. Thanks
This is relevant. Every project he's worked on has been a dumpster fire. systemd sucks. PulseAudio sucks. GNOME sucks. Must the GP list out all the ways in which they suck to make it a more objective attack?
[flagged]
[flagged]
Who cares. That is all irrelevant.
I want to know if they raised VC money or not.
Either way at least it isn't anything about AI and has something to do with hard cryptography.
[flagged]
[flagged]
[flagged]
[flagged]
[flagged]
Disgusting.
People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.
Imagine you're using a program hosted on some cloud service S. You send packets over the network; gears churn; you get some results back. What are the problems with such a service? You have no idea what S is doing with your data. You incur latency, transmission time, and complexity costs using S remotely. You pay, one way or another, for the infrastructure running S. You can't use S offline.
Now imagine instead of S running on somebody else's computer over a network, you run S on your computer instead. Now, you can interact with S with zero latency, don't have to pay for S's infrastructure, and you can supervise S's interaction with the outside world.
But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
However --- if S's author could run S on your computer in such a way that he could prove you haven't tampered with S or haven't observed its secrets, he can let you run S on your computer without giving up control over S. Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
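One concrete mechanism behind that idea is sealing: a secret is wrapped so it only unwraps when the platform's measured state matches what the author expects, roughly what a TPM does when it creates an object bound to a PCR policy. A toy analogue follows; the XOR keystream cipher, the key names, and the "model weights" payload are all illustrative, not a real TPM API:

```python
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (counter-mode hash)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(secret: bytes, device_key: bytes, expected_state: bytes) -> bytes:
    """Wrap secret so it only unwraps when the measured state matches."""
    k = hmac.new(device_key, expected_state, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(secret, _keystream(k, len(secret))))

def unseal(blob: bytes, device_key: bytes, measured_state: bytes) -> bytes:
    """Unwrap using the state actually measured at boot; a mismatch
    yields garbage rather than an error, as with a wrong key."""
    k = hmac.new(device_key, measured_state, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(blob, _keystream(k, len(blob))))

good = hashlib.sha256(b"untampered S").digest()
bad = hashlib.sha256(b"patched S").digest()
blob = seal(b"S's secret sauce", b"device-root-key", good)
assert unseal(blob, b"device-root-key", good) == b"S's secret sauce"
assert unseal(blob, b"device-root-key", bad) != b"S's secret sauce"
```

In a real deployment the device key never leaves tamper-resistant hardware; without that property, a scheme like this reduces to obfuscation.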
Sure, in this deployment model, just like in the cloud world, you wouldn't be able to run a custom S: but so what? You don't get to run your custom S either way, and this way, relative to cloud deployment, you get better performance and even a little bit more control.
Also, the same thing works in reverse. You get to run your code remotely in such a way that you can trust its remote execution just as much as you can trust that code executing on your own machine. There are tons of applications for this capability that we're not even imagining because, since the dawn of time, we've equated locality with trust, and we can now, in principle, decouple the two.
Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
> People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.
But it won't be used like that. It will be used to take user freedoms out.
> But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.
That use case you're describing is already there and is currently being done with DRM, either in the browser or in the app itself.
You are right that it will make this easier for the app vendor to do, and in theory it is still a better option for video games than kernel anti-cheat. But it is still limiting user freedoms.
> Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.
The majority of the uses will be user-hostile, because those are the only cases someone will decide to fund.
> Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
To be honest, mainly companies need that; personal users do not. And additionally, companies are NOT restrained by governments from exploiting customers as much as possible.
So... I also see it as enslaving users. And tell me: where does this actually give PRIVATE persons, NOT companies, a net benefit?
additionally:
> This potential shouldn't prevent our inventing new kinds of tool.
Why do I see someone who wants to build an atomic bomb for shits and giggles using this argument, too? As hyperbolic as my comparison is, the argument given is not good here either.
The immutable Linux people build tools without building the good tools that would actually make it easier for private people at home to adapt an immutable Linux to THEIR liking.
I will put some trust in these people if they make this a pure nonprofit organization at the minimum, building in measures to ensure that this will not be pushed for the most obvious use case, which is to fight user freedom. This shouldn't be some afterthought.
"Trust us" is never a good idea with profit seeking founders. Especially ones who come from a culture that generally hates the hacker spirit and general computing.
You basically wrote a whole narrative of things that could be. But the team is not even willing to make promises as big as yours. Their answers were essentially just "trust us we're cool guys" and "don't worry, money will work out" wrapped in average PR speak.
> trust us we're cool guys
I'm guessing you're referencing my comment, that isn't what I said.
> But the team is not even willing to make promises as big as yours.
Be honest, look at the comment threads for this announcement. Do you honestly think a promise alone would be sufficient to satisfy all of the clamouring voices?
No, people would (rightfully!) ask for more and more proof. The best proof is going to be to continue building what we are building, and then you can judge it on its merits. There are lots of justifiable concerns people have in this area, but most either don't really apply to what we are building or are much larger social problems that we really are not in a position to affect.
I would also prefer to be judged based on my actions, not on wild speculation about what I might theoretically do in the future.
> bad actors can use attestation technology to do all sorts of user-hostile things
Not just can. They will use it.
Will it be backdoorable, the way systemd-enabled distros nearly had a backdoored SSH? Because non-systemd distros weren't affected.
Why should we trust microsofties to produce something secure and non-backdoored?
And, lastly, why should Linux's security be tied to a private company? Oooh, but it's of course not about security: it's about things like DRM.
I hope Linus doesn't get blinded here: systemd managed to get PID 1 on many distros but they thankfully didn't manage, yet, to control the kernel. I hope this project ain't the final straw to finally meddle into the kernel.
Currently I'm doing:
But Proxmox is Debian-based, and Debian really drank too much of the systemd Kool-Aid.
So my plan is:
And then I'll be, at long last, systemd-free again.
This project is an attack on general-purpose computing.
The first thing that comes to mind is anti-cheat software. Would that be something solved if these objectives are achieved?
Cheating was solved before any of this rootkit level malware horseshit.
Community-run servers with community administration, by people who actually cared about showing up and removing bad actors and cheaters.
Plenty of communities are still demonstrating this exact fact today.
Companies could 100% recreate this solution with fully hosted servers, with an actually staffed moderation department, but that slightly reduces profit margins so fuck you. Keep in mind community servers ran on donations most of the time. That's the level of profit they would lose.
Companies completely removed community servers as an option instead, because allowing you to run your own servers means you could possibly play the game with skins you haven't paid for!!! Oh no!!! Getting enjoyment without paying for it!!!
All software attempts at anti-cheat are impossible. Even fully attested consoles have had cheats and other ways of getting an advantage that you shouldn't have.
Cheating isn't defined by software. Cheating is a social problem that can only be solved socially. The status quo 20 years ago was better.
Every day the world is becoming more polarized. Technology corporations gain ever more control over people's lives, telling people what they can do on their computers and phones and what they can talk about on social platforms, censoring what they please, wielding the threat of being cut off from their data and their social circles on a whim. All over the world, in dictatorships and also in democratic countries, governments turn more fascist and more violent. They demonstrate that they can use technology to oppress their population, to hunt dissent, and to efficiently spread propaganda.
In that world, authoring technology that enables this even more is either completely mad or evil. To me Linux is not a technological object, it is also a political statement. It is about choice, personal freedom, acceptance of risk. If you build software that actively intends to take this away from me to put it into the hands of economic interests and political actors then you deserve all the hate you can get.
> To me Linux is not a technological object, it is also a political statement. It is about choice, personal freedom ...
I have used Linux since the Slackware days. Poettering is the worst thing that happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
This is not about security for the users. It's about control.
At least many in this thread are criticizing the project.
And, once again of course, it's from a private company.
Full of ex-Microsofties.
I don't know why anyone interested in hacking would cheer for this. But then maybe HN should be renamed "CN" (Corporate News) or "MN" (Microsoft News).
> Poettering is the worst thing that happened to the Linux ecosystem and, of course, he went on to work for Microsoft. Just to add a huge insult to the already painful injury.
agreed, and now he's planning on controlling what remains of your machine cryptographically!
> I have used Linux since the Slackware days. Poettering is the worst thing that happened to the Linux ecosystem
Same here, Linux since about 1995. Same opinion.
> And, once again of course, it's from a private company. Full of ex-Microsofties.
And funded. And confident they will sell the product well.
Lennart Poettering. The leading expert in forcing things down your throat. Great.
For all those people posting negative comments, please see all the comments from when Red Hat was acquired by IBM (2018):
https://news.ycombinator.com/item?id=18321884
- Linux is better now
- Nothing bad
Surely Red Hat has gone from being the de facto default Linux to relative obscurity?
Been wanting this ever since doing it in Fuchsia. Really excited to see added focus and investment in this for the Linux ecosystem.