OpenClaw isn't fooling me. I remember MS-DOS

5 hours ago (flyingpenguin.com)

Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks? I just personally have zero interest in letting an AI into my comms and see no value there whatsoever. Probably negative.

  • I find some value in it as kind of a better Alexa.

    I have it hooked up to my smart home stuff, like my speaker and smart lights and TV, and I've given it various skills to talk to those things.

    I can message it "Play my X playlist" or "Give me the Gorillaz song I was listening to yesterday"

    I can also message it "Download Titanic to my jellyfin server and queue it up", and it'll go straight to the pirate bay.

    Because it has a browser, can run CLI tools, and understands English well enough to know that "Give me some Beatles" means using its audio skill, it's a vastly better Alexa

    It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.

    • > It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.

      I have a hard time imagining how much better Alexa would have to be for me to spend $180/month on it...

    • > It only costs me like $180 a month in API credits

      In The Netherlands you can get a live-in au-pair from the Philippines for less than that. She will happily play your Beatles song, download the Titanic movie for you, find your Gorillaz song and even cook and take care of your children.

      It's horrible that we have such human exploitation in 2026, but it does put into perspective how much those credits are if you can get a real-life person doing those tasks for less.

    • > "Download Titanic to my jellyfin server and queue it up", and it'll go straight to the pirate bay

      You could build up a legitimate collection for much less than $180/mo.

    • $180 a month for a PA is a lot of money. But I guess each person has their own priorities. I mean, I could pay for a very fancy gym at that price instead of the shitty popular one I go to, which would probably improve my well-being much more than asking it to play Gorillaz.

    • Am I right to be a little concerned by the phrase "it'll go straight to the pirate bay"?

      Not to be a narc or anything, but is OpenClaw liable to just perform illegal acts on your behalf just because it seemed like that's what you meant for it to do?

    • $180/month to queue playlists does not “seem okay” at all. We must be living in different worlds.

    • Using OpenClaw for that is nuts. Claude or GPT could just one-shot an app for you that does all that and uses zero tokens once you've built it.

    • Regarding Alexa, none of those use cases sound useful enough to justify an ever-present listening device at home, unless one is bedbound or something.

    • I have almost the same thing using a network-connected Raspberry Pi and no AI.

  • Many wealthy people use human assistants to offload mundane work.

    This is a cheap replacement for ordinary people.

    It's going to be big. But it's probably best to wait for Google and Apple to step up their assistants.

    • Yes, and that's because the workflow of those people generally requires managing a crazy, dynamic schedule including travel, meetings, comms, etc. Those folks need real humans with long-term memories and incentives to establish trust for managing these high-stakes engagements. Their human assistants might find these things useful, but there's zero chance Bill Gates is having an AI schedule his travel plans or draft his text messages.

      OTOH, this isn't an issue for "ordinary people". They go to work, school, children's sports events, etc. If they had an assistant for free, most of them would probably find it difficult to generate enough volume to establish the muscle memory of using them. In my own professional life, this occurred with junior lawyers and legal assistants--the juniors just never found them useful because they didn't need them even though they were available. Even the partners ended up consolidating around sharing a few of them for the same reason.

      Down in this thread someone mentions it being an advanced Alexa, which seems apt. Yes, a party novelty, but not useful enough to be top of mind in the everyday workflow.

    • My 2 cents: so far LLMs have had a bad track record at replacing people in jobs that simple software logic and flowcharts couldn't already handle.

    • I'm not sure how solvable it is. It only takes one screw up to ruin the reputation, and a screw up is basically guaranteed.

      The tech has existed for a while but nobody sane wants to be the one who takes responsibility for shipping a version of this thing that's supposed to be actually solid.

      Issues I saw with OpenClaw:

      - reliability (mostly due to context mgmt), esp. memory, consistency. Probably solvable eventually

      - costs, partly solvable with context mgmt, but the way people were using it was "run in the background and do work for me constantly" so it's basically maxing out your Claude sub (or paying hundreds a day), the economics don't work

      - you basically had to use Claude to get decent results, hence the costs (this is better now and will improve with time)

      - the "my AI agent runs in a sandboxed docker container but I gave it my Gmail password" situation... (The solution is don't do that, lol)

      See also simonw's "lethal trifecta":

      >private data, untrusted content, and external communication

      https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

      The trifecta (prompt injection) is sorta-kinda solved by the latest models, from what I understood. (But maybe Pliny the Liberator has a different opinion!)

    • $180 a month is huge for "ordinary people".

      So I guess that leaves the in-between people who don't care about spending $180 every month but don't have any personal staff yet or even access to concierge services.

    • The problem is that if you're wealthy enough to hire someone to do your errands, those errands likely aren't very mundane - the exception is a socialite giving their friend a low-effort job, but executive assistants are paid well because their jobs are cognitively demanding.

      OTOH a lower-middle-class Joe like me really does have a lot of mundane social/professional errands, which existing software has handled just fine for decades. I suppose on the margins AI might free up 5 minutes here or there around calendar invites / etc, but at the cost of rolling snake eyes and wasting 30 minutes cleaning up mistakes. Even if it never made mistakes, I just don't see the "personal assistant" use case really taking off. And it's not how people use LLMs recreationally.

      Really not trying to say that LLM personal assistants are "useless" for most people. But I don't think they'll be "big," for the same reason that Siri and Alexa were overhyped. It's not from lack of capability; the vision is more ho-hum than tech folks seem to realize.

  • I can see value in a smarter email-inbox sorting algorithm - but only because all major players (except Google, which I don't trust with my mail) have abandoned trainable Bayesian email filtering. This was standard in 2005 in clients as basic as the Opera browser, but somehow we lost this technology along the way.

    • I was an original Thunderbird pre-1.0 (from 2003) user and, prior to that, Netscape Mail, and am quite certain it has had Bayesian spam filtering all this time, at least since the late ‘90s. That was a headline feature in the early days. My first email account used POP3 through a shared web host for my own domain in that era.

      Edit: Yes it’s still there https://support.mozilla.org/en-US/kb/thunderbird-and-junk-sp...

    • I can't recall the name, but I vaguely remember a Bayesian spam filter for arbitrary POP3 accounts in the 2000s that had a local web frontend, and how excited I was at its effectiveness.

      I believe that the shift from "my one computer" to multiple clients (computer + phone + webmail) probably has something to do with it. Even with IMAP sharing state, you still don't have a great way to see and control the filtering, except by moving things in/out of spam folders.
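
      The core of what was lost really is tiny, though. A minimal sketch of the idea (illustrative only, not any particular client's implementation; you train it by feeding it messages marked spam/ham):

        import math
        from collections import Counter

        # Minimal naive Bayes mail filter: plain token counting with Laplace
        # smoothing, the same core idea the 2000s-era filters were built on.
        class BayesFilter:
            def __init__(self):
                self.counts = {"spam": Counter(), "ham": Counter()}
                self.totals = {"spam": 0, "ham": 0}

            def train(self, text, label):
                words = text.lower().split()
                self.counts[label].update(words)
                self.totals[label] += len(words)

            def spam_probability(self, text):
                vocab = len(set(self.counts["spam"]) | set(self.counts["ham"])) or 1
                scores = {}
                for label in ("spam", "ham"):
                    score = 0.0
                    for w in text.lower().split():
                        # Laplace-smoothed P(word | label)
                        p = (self.counts[label][w] + 1) / (self.totals[label] + vocab)
                        score += math.log(p)
                    scores[label] = score
                # turn the two log scores back into a probability
                m = max(scores.values())
                exp = {k: math.exp(v - m) for k, v in scores.items()}
                return exp["spam"] / (exp["spam"] + exp["ham"])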

  • I ran OpenClaw in a container, on a VPS without connection to messaging systems, so perhaps that is why I didn't get value.

    Similarly, I have been using Hermes Agent, also inside a container, on a VPS with access only to a local directory holding a dozen active GitHub projects. I don't give it access to my GitHub credentials, but allow it to work in whatever branch is checked out.

    This setup is fabulously productive. I use it about every other day to perform some meaningful task for me. It is inexpensive also. A task might take 20 minutes and cost $0.25 in GLP-5.1 API costs.

    So TLDR: out of the box, I use Hermes at least one hour a week and find it to be a wonderful tool.

  • I see the appeal, but I also see the risks.

    If you ignore the risks I don't see why it's hard to see value.

    The AI can read all your email, that's useful. It can delete them to free up space after deciding they are useless. It can push to GitHub. The more of your private info and passwords you give it the more useful it becomes.

    That's all great, until it isn't.

    Putting firewalls in place is probably possible and obviously desirable but is a bit of a hassle and will probably reduce the usefulness to some degree, so people won't. We'll all collectively touch the stove and find out that it is hot.

    • Just limit the tooling. There's no reason for the AI to be able to delete emails for example.

      I built a fastmail CLI tool for my *claw and it can only read mails, that's it. I might give it the ability to archive and label later on, with a separate log of actions so I can undo any operation it did easily.

      It's pretty decent at going "hey, there's a sale on $thing at $store" for mails, but that's about it.
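
      For anyone curious, the whole "read-only tool" is roughly this shape against Fastmail's JMAP API (a hypothetical sketch, not my actual code; the token itself is scoped read-only on the Fastmail side):

        import os
        import requests

        # The agent only ever gets this one entry point: it can list and read
        # recent messages, and there are simply no write methods to call.
        TOKEN = os.environ["FASTMAIL_TOKEN"]
        HEADERS = {"Authorization": f"Bearer {TOKEN}"}

        def recent_messages(limit=20):
            session = requests.get("https://api.fastmail.com/jmap/session",
                                   headers=HEADERS).json()
            account = session["primaryAccounts"]["urn:ietf:params:jmap:mail"]
            body = {
                "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
                "methodCalls": [
                    ["Email/query", {"accountId": account,
                                     "sort": [{"property": "receivedAt",
                                               "isAscending": False}],
                                     "limit": limit}, "q"],
                    ["Email/get", {"accountId": account,
                                   "#ids": {"resultOf": "q", "name": "Email/query",
                                            "path": "/ids"},
                                   "properties": ["subject", "from"]}, "g"],
                ],
            }
            resp = requests.post(session["apiUrl"], headers=HEADERS, json=body).json()
            return resp["methodResponses"][1][1]["list"]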

  • This is being asked on pretty much every Openclaw thread, and the use cases brought up seem roughly similar: digital assistant.

    It of course depends heavily on your work, but my work is 50% communication / overseeing, and I simply lose track of everything.

    I don’t give it any credentials of any sort, but I run data pipelines on an hourly basis that ingest into the agent’s workspace.
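
    A sketch of that pattern, since it's the part that keeps the agent credential-free: a cron job holds the credentials and drops snapshots into the workspace, and the agent only ever reads files (endpoint and paths are placeholders):

      #!/usr/bin/env python3
      # Runs from cron, e.g.:  0 * * * * /usr/local/bin/ingest.py
      # The credentialed fetch lives here, never inside the agent.
      import json
      import pathlib
      import datetime
      import requests

      WORKSPACE = pathlib.Path("/srv/agent/workspace/inbox")  # agent-readable

      def main():
          token = pathlib.Path("/etc/ingest/token").read_text().strip()
          data = requests.get("https://internal.example.com/api/tickets",
                              headers={"Authorization": f"Bearer {token}"},
                              timeout=30).json()
          stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M")
          WORKSPACE.mkdir(parents=True, exist_ok=True)
          (WORKSPACE / f"tickets-{stamp}.json").write_text(json.dumps(data, indent=2))

      if __name__ == "__main__":
          main()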

  • > letting an AI into my comms

    Idk, it's strange for me to think of it that way. It's tech. If it does something useful, that's cool.

    Data protection is always a consideration. I just don't consider an LLM to be a special case or a person, the same way that I don't have strong feelings about "AI" having been applied in Google search since forever. I don't have special feelings or get embarrassed by the thought of an LLM touching my mails.

    Right now, for me, agentic coding is great. I have a hard time seeing a future where the benefits we experience there will not be more broadly shared. Explorations in that direction are how we get there.

    • My issues aren’t really with privacy so much as what the failure modes look like, and, more fundamentally, with becoming a passenger to my own life.

    • The problem for me is not the LLM reading it. The problem is that the company behind it can most likely recover the sessions. That is a problem since they could share them with whomever they want. Even if they are fully incorruptible, it's also not uncommon that companies simply get hacked and all this data ends up on the open market.

  • > Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks?

    Mostly (but of course, not exclusively), porn for the techies. Receiving a phone notification every time a PR is opened on a project of yours? Exciting or sad, depends on one's outlook on life.

  • There is value but it is hard to discover and extract outside of a few known areas - like coding, etc.

    • Yes, I can see the (potential) value in working with agents in software development. The “claw” movement I understood to suggest value in less constrained access to my inbox, personal messages, calendar, etc., like some sort of PA. It’s hard to quantify how much damage a bad PA can do to someone’s personal and professional life, so if my understanding is correct, this seems like a dead end.

  • Same here; I care only to the extent I'm obligated to for staying relevant and finding a job.

  • It's pretty much just Claude Code, except hooked up to your Telegram / WhatsApp / iMessage.

    I don't know why they don't make an official integration for it. Probably cause they're already out of GPUs lol

  • It all depends on what you do, aka your use case. If you're in the content creation business, which is part of my responsibilities, then yes, it has been massively helpful. For other roles, I can see absolutely no use case or benefit. Context matters, like with everything.

  • Agent environments like OpenClaw are in the toy phase, and OpenClaw is teaching people how to build things with agents in a toy-like and unreliable way. I used my understanding of OpenClaw to build scalable + secure + auditable agent infrastructure in my platform such that I can build products that other people can use.

    • We had better agent infrastructures (namely JADE) back in the day. I worked with them, and now these things look like flimsy 50¢ plastic toys to me, too.

  • No.

    But I am someone that, for example, dislikes home automation. You know that thing where you ask Alexa to open your curtains? I think that is cringe af.

    Maybe there's an overlap with the crowd that likes that.

  • Eh, buddy says he uses them for his network and, apparently, some light IT maintenance for his family members. So far it seems to be working for him. I am not that brave.

I don’t get this OpenClaw hype.

When people vibe-code, usually the goal is to do something.

When I hear about people using OpenClaw, usually the goal seems to be… using OpenClaw. At the cost of a Mac Mini, safety (it deleting emails and the like), and security (the LiteLLM attack).

  • The idea is to get a virtual personal assistant. Like Siri or Gemini but with access to all of your accounts, computers, etc. (Well whatever you give it access to). Like having a butler with access to your laptop.

    From what I understand, the main appeal isn't the end result; it's building that AI personal assistant as a hobby.

    • With a goal like this I could, at least on paper, find it useful... But I'm curious to see whether this goal is really achievable, and how easily.

  • In the early 1980s, what did people use home computers such as Ataris and Commodore 64s for? Mostly playing games; nerds also used their computer with the goal seeming to be… using their computer.

    It wasn’t (only) that, though; they also learned, so that, when people could afford to buy computers that were really useful, there were people who could write useful programs, administer them, etc.

    Same thing with 3D printers a decade or so ago. What did people use them for? Mostly tinkering with hardware and software for days to finally get them to print some teapot or rabbit they didn’t need, or another 3D printer.

    This _may_ be similar, with OpenClaw-like setups eventually getting really useful and safe enough for mere mortals.

    But yes, the risks are way larger than in those cases.

    Also, I think there are safer ways to gain the necessary expertise.

  • I have OC on a VPS. So far it's a way for me to play with non-Claude models and try to get them to get OC under control. So far I'm about $200 all in and OC is still not under control. Every few weeks it goes on an ACP bender and blows my credits in hidden sub-agents for no damn reason. I'm determined to break this horse though, it's like a fun video game with a glitchy end boss.

    • How long have you been using it for it to have consumed $200? To me that sounds like a lot (I'm still a student), but it doesn't seem to be the same for you.

  • It’s basically a reimagined n8n-like low-code platform with LLM magic. Digital glue.

    That’s why there isn’t a coherent use story: like glue, the answer is whatever the user needs to glue together/get done.

  • The main "sales pitch" appears to be "You can have the computer do things for you without having to learn how to use a computer" (at the cost of now having to learn how to use a massively overcomplicated and fundamentally unreliable system; It's just an illusion of ease of use.)

    The thread's linked article is about comparing MS-DOS' security, but the comparison works on another level as well: I remember MS-DOS. When the very idea of the home/office computer was new. When regular people learned how to use these computers.

    All this pretension that computers are "hard to use", that LLMs are making the impossible possible, it's all ahistoric nonsense. "It would've taken me months!" no, you would've just had to spend a day or two learning the basics of python.

    • I was one of those using MS-DOS (I still remember the blue Norton Commander). I didn't understand people mocking it later - it just worked. Enough to run Prince of Persia, Doom and the like. Or edit text files. (In my defense, I was just ~7 years old back then.)

This weekend I installed Hermes on my computer. My M4 Max Studio started spinning its fans as if it wanted to fly, so I went for some cloud-hosted models. The thing works as advertised, but token consumption is through the roof. Of course, YMMV depending on the LLM you choose.

But my main takeaway is that from a security standpoint this is a ticking bomb. Even under Docker, for these things to be useful there is no getting around giving it credentials and permissions that are stored on your computer, where they can be accessed by the agent. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter) and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents of unknown origin. And to top it off, don't forget that the agent itself can malfunction and, say, remove all your email inboxes by mistake.

Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.

  • The credentials-on-device thing is the real blocker for a lot of people. I built atmita.com going the other way: cloud-hosted so nothing lives on your box, OAuth handled on the server, and a safe mode where destructive actions wait for phone approval before they fire. Not based on OpenClaw, built from scratch, so the Docker/token-exfil surface isn't part of the stack.

  • > One can clearly see why OpenAI hired Clawdbot

    Hype, mainly buying hype before their IPO. The project is open source and the thinking behind it is not difficult; if they truly wanted to, they could have done it a long time ago, even without the guy. It was a pure hype 'acquisition' of a project that became popular with amateur programmers who got into it through vibe-coding and are unaware of the consequences and the security exposure they subject themselves to.

One could argue that the discussion is once again about tech debt.

Both OpenClaw and MS-DOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MS-DOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is supposed to appeal to YOLO / FOMO sentiments.

And of course, neither will be able to evolve to fit its eventual real-world context. But for some time (much longer than intended), that's where each will live.

  • It worked to launch the creator into a gig at OpenAI.

    Similar YOLO attitude to OpenAI's launch of modern LLMs while Google was still worrying about all the legal and safety implications. The free market does not often reward conservative responsible thinking. That's where government regulation comes in.

    • > It worked to launch the creator into a gig at OpenAI.

      True, but it doesn't scale. No amount of YOLO will let anyone else repeat that feat.

  • OpenClaw was an inevitability. An obvious idea that predates LLMs. It took this long for models and pricing to catch up. As much as I dislike this term, if there's one clear example of "Product Model Fit", it's OpenClaw - well, except that arguably what made it truly possible was subscription pricing introduced with Claude Code; before, people were extremely conservative with tokens.

    But the point is, OpenClaw is just the first that lucked out and went viral. If not for it, something equivalent would have. Much like LangChain in the early LLM days.

  • MS-DOS and similar single-user OSes were not originally designed for networked computers with persistent storage. Different set of constraints.

$180/month to control your lights and music. A Raspberry Pi + Home Assistant does this for $0/month and doesn't exfiltrate your home network topology to a third-party API. The value proposition only makes sense if your time is worth more than your privacy.
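
For comparison, the entire "lights and music" loop against a local Home Assistant instance is a couple of authenticated REST calls (host, token and entity names below are placeholders):

  import requests

  HASS = "http://homeassistant.local:8123"
  HEADERS = {"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"}

  def lights_on(entity="light.living_room"):
      # Home Assistant's REST API: POST /api/services/<domain>/<service>
      requests.post(f"{HASS}/api/services/light/turn_on",
                    headers=HEADERS, json={"entity_id": entity}, timeout=10)

  def play(media_id, player="media_player.speaker"):
      requests.post(f"{HASS}/api/services/media_player/play_media",
                    headers=HEADERS,
                    json={"entity_id": player,
                          "media_content_id": media_id,
                          "media_content_type": "playlist"}, timeout=10)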

  • The comparison to smart home gadgetry seems apt to me. I actually want to hack on something LLM agent-related to practice what is clearly a marketable skill, but I can't find anything I'd actually want it to do for me in my real life, other than maybe sort my emails for me, but there's no way I'm going to pipe every one of my emails to an LLM company.

    I remember circa 2015 all my nerdy colleagues were going wild with home automation stuff, and I felt like I wanted to play with it too at first. But then I started to observe that these guys weren't spending less time than me turning on their lights. They were spending way more time than me, in fact, tinkering with their thermostats and curtains. I'm perfectly happy hitting a light switch when I walk in the door.

    I can't envision one of these Telegram bots reliably completing tasks for me. Maybe the closest would be what I've seen in this thread: downloading torrents and putting them in Jellyfin for me. But really, I don't hate curating my own media collection.

  • This comparison is dishonest, and you know that it is. This is coming from someone that uses Home Assistant and wouldn’t touch OpenClaw with a 10 foot pole. If I had a horse in this race it’d be your horse, but to pretend that these achieve the same goals is just… not in the spirit of an actual discussion.

    • I have the voice assistant on Mike hooked up to Claude and it does most of the things I’d want OpenClaw to do.

      I’m not generally interested in having it read my email or calendar. I have a digital calendar in the kitchen, and I rarely get important email. I do really enjoy being able to control my house by voice in natural language. I had it set all my lights to Easter colors a while back in a single instruction.

    • Kindly elaborate? Coming from someone who still uses AI mainly to draft emails and a Raspberry Pi as a sandboxed automation project.

DOS didn't have certain protections because the hardware it targeted did not have those protections. For UNIX on the same machines, they also had no such protections. On 8086 there were no CPU rings, no virtual memory and no other features to help there.

Memory isolation is enforced by the MMU. This is not software.

Maybe you were confused with Linux, which came later and landed in a soft 32-bit x86 bed with CPU rings and page tables/virtual memory. ("Protected mode", named for that reason...)

That being said, OpenClaw is criminally bad, but as such, fits well in our current AI/LLM ecosystem.

  • > DOS didn't have certain protections because the hardware it targeted did not have those protections. For UNIX on the same machines, they also had no such protections. On 8086 there were no CPU rings, no virtual memory and no other features to help there.

    Those arrived with the 386 (286? Don't remember but 386 for sure) and DOS was well alive late into the 386 and even late in the 486 days.

    > For UNIX on the same machines, they also had no such protections.

    I was already running Linux on my 486 before Windows 95 arrived. Linux and DOS. One had those protections, the other didn't.

This isn't especially related to the article, but when I was at university my first assembler class taught the Motorola 680x0 assembly. I didn't own a computer (most people didn't) but my dorm had a single Mac that you could sign up to use so I did some assignments on that.

Problem is, I was just learning, and the Mac was running System 7. Which, like MS-DOS, lacked memory protection.

So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.

I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.

Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.

To this day, protected memory still feels a little luxurious.

I run OpenClaw on a $4 VPS with read-only access to most of the accounts. Just this morning I asked it to confirm how exactly our company is paying for a particular service and whether we ever switched to the vendor directly. In about 30s it found all the necessary emails and provided me with a timeline.

It's like your actual assistant. Most of this can be done inside ChatGPT/Claude/Codex now. Their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.

I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to take a closer look.

I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment it is a win for me.

  • When the agent uses your GH credentials to nuke all your projects or put out a lot of crap, this separation will not save you.

    • Whitelisting `gh` args should solve it. Even opencode's primitive permission system allows that.
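
      Something like this shim on the agent's PATH would do it (a hypothetical sketch: only read-only subcommands pass through to the real binary):

        #!/usr/bin/env python3
        # Install on the agent's PATH as `gh`; the real gh stays outside it.
        import sys
        import subprocess

        ALLOWED = {
            ("pr", "list"), ("pr", "view"), ("pr", "diff"),
            ("issue", "list"), ("issue", "view"),
            ("repo", "view"),
        }

        args = sys.argv[1:]
        if tuple(args[:2]) not in ALLOWED:
            print(f"blocked: gh {' '.join(args)}", file=sys.stderr)
            sys.exit(1)
        sys.exit(subprocess.call(["/usr/bin/gh", *args]))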

The analogy the author highlights is the multi-purpose nature of these machines, which I believe persists to this day, and is why some people have a hard time adopting Linux (or why UAC was controversial in an older Windows version): the conflation of personal computers with multi-user IT systems or servers. The Wal-Mart IT story used to make the analogy falls in the latter category. My dad typing up documents for work, or me playing The Lost Mind of Dr. Brain and Mario Teaches Typing, had different security requirements.

Why am I totally unable to understand this post? I have been a long-time computer user, but this has way too much jargon for me.

  • There's a difference between using a thing and understanding how it works. There's a lot of stuff in this that reference things that only hardware and software creators are going to understand, and only if they're deep enough into their craft.

    "Interrupts", for example, are an old concept that is rarely talked about anymore until you get into low-level programming. At a high level, you don't even think about them, let alone talk about them.

The thread went straight to cost/ROI but the article's actual argument is about security architecture: 'sandbox around the whole agent' vs. 'enforce at the tool layer.' OpenClaw/NemoClaw's setup — binding Ollama to 0.0.0.0 across a network namespace, pairing through the chat channel, approving connections at the netns boundary — are each workarounds for a foundation that didn't separate concerns early. The Unix principle wasn't 'wrap your DOS program in a safer shell' — it was address space and identity separation built in from below. Whether local inference is worth $180/mo is a separate question from whether the permission model belongs at the network boundary or at the tool dispatch layer.
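
To make the distinction concrete, "enforce at the tool layer" means every tool call funnels through one dispatcher that checks policy before anything executes. A sketch with illustrative names (not OpenClaw's actual API), here encoding the lethal-trifecta rule directly:

  from dataclasses import dataclass
  from typing import Any, Callable

  @dataclass
  class Tool:
      fn: Callable[..., Any]
      reads_private_data: bool
      touches_network: bool

  def dispatch(session, name, tools, **kwargs):
      tool = tools[name]
      # once a session has seen private data, network-capable tools are off
      if session.get("saw_private_data") and tool.touches_network:
          raise PermissionError(f"{name}: blocked to prevent exfiltration")
      if tool.reads_private_data:
          session["saw_private_data"] = True
      return tool.fn(**kwargs)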

That’s a great deal of technical isolation, but it does little to address the real problem. If the agent has access to both your info (email, files, etc.) and reads things on, say, the open internet, then it’s vulnerable to prompt injection and data exfiltration.

And if you remove either access to the data or access to the internet, then you kill a good chunk of the usefulness.

The value proposition seems clear: OpenClaw lets you speedrun the Why of application security and sandboxing from first principles. Start with putting all of your money and your valuables in a box without a lock stored in a public place. If you learn something from that, you may proceed with the next step.

And MS-DOS was a massive success. Even 'massive' is such an understatement and English probably needs to invent a new word for that level of world-changing business.

So yeah, perhaps it isn't fooling the author, but it doesn't matter for the other billions of people.

And I remember OSes today, 1 year ago, 5 years ago, 10 years ago, etc. Security was always a problem. People blindly delegate admin privileges to scripts and programs from the internet all the time. It’s hard to make something secure and usable at the same time. It’s not like agent harnesses suddenly broke all adopted best practices around software and sandboxing.

I remember Apple introducing sandboxing for Mac apps and extending deadlines because no one was implementing it. AFAIK, many developers still don’t release apps there simply because of how limiting it is.

Ironically, the author suggests to install his software by curl’ing it and piping it straight into sh.

I believe the codegen must be separated from the runtime. Every time you ask the AI for a new task, the result must be deployed as a separate app with the least privileges possible, potentially with manual approvals as the app executes. So essentially you need a workflow engine.
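
A minimal sketch of that idea, assuming the generated code is broken into discrete steps and privileged ones block on human approval (all names hypothetical):

  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class Step:
      name: str
      run: Callable[[], None]
      privileged: bool = False  # sends mail, deletes files, spends money...

  def execute(workflow):
      for step in workflow:
          if step.privileged:
              answer = input(f"approve privileged step '{step.name}'? [y/N] ")
              if answer.strip().lower() != "y":
                  print(f"skipped: {step.name}")
                  continue
          step.run()

  execute([
      Step("summarize inbox", lambda: print("read-only, runs freely")),
      Step("send replies", lambda: print("sending..."), privileged=True),
  ])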

"Fast" is not always a virtue and "efficiency" is not always the only consideration.

Great article. I've been skeptical since the beginning of these Python "CLI" agents. I'd been looking for a local, AI-driven agentic GUI that offers real privacy but couldn't find one anywhere. Finally, what we'd call a truly local CLI-agent pipeline, AI-driven with a llama.cpp engine, is done. Just pure Bash and C++, model isolated, no HTTP, no Python, no API, no proprietary models. There is a native version (in C++) and a community version in Electron. Is Electron good enough to protect users, wrapping all the rest? This is exciting.

It does not look like it supports streaming of responses from the LLM into the channel. Big issue for local inference.

Wow. Much security.

I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. nvm

It wasn't entirely DOS's fault. DOS was a relic from the end of the single-process, single-user era. Corporate took that and bent it to their use instead of settling for something more complex and harder that would have required an entire department to maintain.

*Claw is more like Windows 98. Everyone knows it is broken; nobody really cares. And you are almost certainly going to be cryptolocked (or worse) because of it. It isn't a matter of if, but when.

I think we should be giving AI access to something like TempleOS, where there are no permissions, everything runs unrestricted, and you can rewrite the OS while it's running.

> curl-pipe-sh as well. The installer verifies the release signature with ssh-keygen against an embedded key, fail-closed on every failure path. The installer’s own SHA is pinned in the README for readers who want to check the script before piping.

Packages shipping as part of Linux distros are signed. Official Emacs packages (though not those bundled with the default Emacs install) are all signed too.

I thankfully see some projects released outside of distros that are signed by the author's private key. Some of these keys I have had saved (and archived) for years.

I've got my own OCI containers automatically verifying signed hashes against known authors' past public keys (i.e. I don't necessarily blindly trust a brand-new signing key the way I trust one I know the author has been using for 10 years).

Adding SHA hashes pinning to "curl into bash" is a first step but it's not sufficient.

Software shipped properly isn't just pinning hashes into shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned JavaScript site.

Proper software releases are signed. And they're not "signed" by the 'S' in HTTPS as in "That Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".

Is it hard to understand that signing a hash (that you can then PIN) with a private key that's on an airgapped computer is harder to hack than an online server?
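
Concretely, this takes nothing more than stock OpenSSH: the author signs once on the airgapped machine (`ssh-keygen -Y sign -f release_key -n release artifact.tar.gz`), and the installer verifies against a key the user already trusts rather than one fetched from the same possibly-pwned site. A sketch (identities and paths are placeholders):

  import subprocess

  def verify_release(artifact, sig, signers_file):
      # signers_file is an allowed_signers list obtained out of band,
      # e.g.:  dev@example.com ssh-ed25519 AAAA...
      with open(artifact, "rb") as f:
          result = subprocess.run(
              ["ssh-keygen", "-Y", "verify",
               "-f", signers_file,
               "-I", "dev@example.com",   # expected signer identity
               "-n", "release",           # namespace chosen at signing time
               "-s", sig],
              stdin=f,
          )
      return result.returncode == 0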

We see major hacks nearly daily now. The cluestick is hammering your head, constantly.

When shall the clue eventually hit the curl-basher?

Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a 10-year-old private key that has never left an airgapped computer".

Here, a fucking cluestick for the leftpad'ers:

https://wiki.debian.org/Keysigning

(btw Debian signs the hash of the testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)