Claws are now a new layer on top of LLM agents

1 day ago (twitter.com)

https://xcancel.com/karpathy/status/2024987174077432126

Related: https://simonwillison.net/2026/Feb/21/claws/

This feels like the 2026 version of "blog": a thing that didn't need a name, and the name it now has carries an "out of touch" quality, but it spread more easily under a name that got popularized, so it wins out in evolutionary terms?

Unlike blog though, claw is camping on an existing word, and it won't surprise me if people settle on some other word once a more popular, professional and security-conscious variant exists.

I don't think operating through messaging services will be considered anything unique, since we've been doing that for over 30 years. The mobile dimension doesn't change this much, except for the difference between always connected and push notifications along with voice convenience being a given. Not using MCP was expected, because even in my personal experiments it was very natural to never adopt MCP. It's true that there are some qualities MCP has that can be useful, but it's extra work and friction that doesn't always pay off.

Total access + mobile messaging + real productivity is naturally addictive, and maybe it's logical that the lazy path to this is the first to become popularized, because the harder problems around it are simply ignored.

All: quite a few comments in this thread (and another one we merged hither) were breaking the site guidelines. Please review https://news.ycombinator.com/newsguidelines.html and make sure that you're using the site as intended when posting here.

  • I’m confused. Can someone please explain to me why he or she is so controversial?

    • The personal attacks I saw were against different people, not just one. In a lot of cases it's just routine internet cynicism, which is always amplified against unusually successful or prominent people.

      There's also a lot of fear and anger about the AI tsunami these days, among certain user cohorts, and that's an amplifier as well.

      On HN, personal attacks aren't allowed regardless of who's being attacked, and commenters are asked to make their substantive points thoughtfully and not be cynical or snarky. Here's one guideline:

      "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

      https://news.ycombinator.com/newsguidelines.html

      3 replies →

  • [flagged]

    • Being rude isn't helpful. It's not their fault, it's the unavoidable reality of treating complex social signalling as one-dimensional. At minimum Hacker News would need to separate approval/disapproval signals from assessments of whether a comment is constructive. That’s not a simple change given the obvious abuse vectors. It would require reliably distinguishing good-faith participants from bad actors. It can be done, but it's not easy.

      The main reason sites avoid this approach is institutional rather than technical. Adding algorithmic mediation invites accusations of algorithmic bias whenever results are unpopular.[0] Simple manual interventions are often sufficient to nudge community behaviour so that majority outcomes broadly align with the moderators’ priors, without the visibility or accountability costs of a more complex system.

      [0] Case in point being X. People routinely accuse the new management of "juicing" the algorithm to favour their politics, when outcomes are adequately explained by the exodus of contributors on the other side. Isolating innate community bias from algorithms is a philosophically impossible problem.

    • When I review the link posted by @dang it says talking about downvotes is boring. Maybe that's why your comment is grey. (This comment should turn grey as well)

    • It's not 'downvote abuse' if it's working exactly as intended. The community decides what's 'perfectly fine and neutral.' If your comments follow the guidelines, at least they won't get deleted.

      2 replies →

I still don't understand what openclaw is or does, and I've read the docs multiple times over.

"Any OS gateway for AI agents across WhatsApp, Telegram, Discord, iMessage, and more. Send a message, get an agent response from your pocket. Plugins add Mattermost and more."

"What is OpenClaw?

OpenClaw is a self-hosted gateway that connects your favorite chat apps — WhatsApp, Telegram, Discord, iMessage, and more — to AI coding agents like Pi. You run a single Gateway process on your own machine (or a server), and it becomes the bridge between your messaging apps and an always-available AI assistant."

https://docs.openclaw.ai

My best interpretation of this is that it connects a BYO agent to your messenger client of choice. I don't understand the hype. I already have apps that allow me to message the model server running on my home lab. The model server handles tool calls (i.e. it is "agentic"). It has RAG over a dataset with vector search for queries. What is new about openclaw? I would like to understand it, but what I see people say and what is in the docs do not seem compatible. Anyone have a resource?

  • You can go back and forth with some chatbots for details like this ("What is it and how is it different to..." etc). But it does a few things. If all you use it for is a generic chatbot, for example, then it's a huge waste of time for probably a mediocre result. But I'd probably call it an agent orchestration platform that you can interface with via your favourite messaging app. It can run multiple agents that can use skills, but it can also create its own skills, update itself, write code and use tools (tons of wrappers for things like calendars, messaging etc). Which then really means you can in theory do "most" things, but of course there's risk when you have the AI chain tools together and do whatever it wants (if you let it), and lots of people are trying to prompt-inject it because a lot of users have connected sensitive accounts (mail, calendar, credentials, crypto stuff etc) to their bots to get maximum usage.

  • It's something everyone thought about and few implemented for themselves. Now, with one of the implementations catching on in popularity among regular-ish people, there's an easy way to have the same setup without going through the effort of developing one yourself - give it keys and it for the most part just works, whoa

  • I'm glad you asked because I must admit that in the last few weeks I totally thought this was just another agentic harness that happened to have a lot of extensions + ways to talk to it through messaging apps. So does this mean OpenClaw can connect to any agent? In that case I don't understand this part of the docs:

    > Legacy Claude, Codex, Gemini, and Opencode paths have been removed. Pi is the only coding agent path.

One safety pattern I’m baking into CLI tools meant for agents: anytime an agent could do something very bad, like email blast too many people, CLI tools now require a one-time password

The tool tells the agent to ask the user for it, and the agent cannot proceed without it. The instructions from the tool show an all caps message explaining the risk and telling the agent that they must prompt the user for the OTP

I haven't used any of the *Claws yet, but this seems like an essential poor man's human-in-the-loop implementation that may help prevent some pain

I prefer to make my own agent CLIs for everything for reasons like this and many others to fully control aspects of what the tool may do and to make them more useful
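
As a concrete (if simplified) sketch of the OTP gate described above: a hypothetical bulk-email CLI that generates the code itself, stashes only a hash where the tool can check it, delivers the cleartext through a channel the agent can't read (here a placeholder file, but it could be a push notification or an authenticator app), and refuses to run until the code comes back. None of this is from a real tool; it's just the shape of the idea.

  #!/usr/bin/env python3
  # Hypothetical "bulk_email" CLI gate, sketching the OTP pattern above.
  # Assumes the agent cannot read the two files under the user's home dir
  # (e.g. they sit outside the agent's sandbox).
  import argparse, hashlib, secrets, sys
  from pathlib import Path

  OTP_HASH = Path.home() / ".bulk_email_otp_hash"
  OTP_FOR_HUMAN = Path.home() / "otp_for_human.txt"   # stand-in for a real out-of-band channel
  THRESHOLD = 20                                       # "too many people"

  def request_otp() -> None:
      code = f"{secrets.randbelow(10**6):06d}"
      OTP_HASH.write_text(hashlib.sha256(code.encode()).hexdigest())
      OTP_FOR_HUMAN.write_text(code)                   # only the human reads this
      print("DANGER: THIS WOULD EMAIL MORE THAN "
            f"{THRESHOLD} PEOPLE. AGENT: STOP AND ASK THE USER FOR THE "
            "ONE-TIME PASSWORD, THEN RE-RUN WITH --otp <code>.")
      sys.exit(2)

  def main() -> None:
      ap = argparse.ArgumentParser()
      ap.add_argument("recipients", nargs="+")
      ap.add_argument("--otp")
      args = ap.parse_args()
      if len(args.recipients) > THRESHOLD:
          if not args.otp or not OTP_HASH.exists():
              request_otp()
          if hashlib.sha256(args.otp.encode()).hexdigest() != OTP_HASH.read_text():
              print("INVALID ONE-TIME PASSWORD. AGENT: ASK THE USER AGAIN.")
              sys.exit(2)
          OTP_HASH.unlink()                            # single use
      print(f"sending to {len(args.recipients)} recipients...")  # real send would go here

  if __name__ == "__main__":
      main()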

  • Now we do computing like we play Sim City: sketching fuzzy plans and hoping those little creatures behave the way we thought they might. All the beauty and guarantees offered by a system obeying strict and predictable rules go down the drain, because life's so boring, apparently.

    • I think it's Darwinian logic in action. In most areas of software, perfection or near-perfection are not required, and as a result software creators are more likely to make money if they ship something that is 80% perfect now than if they ship something that is 99% perfect 6 months from now.

      I think this is also the reason why the methodology typically named or mis-named "Agile", which can be described as just-in-time assembly line software manufacturing, has become so prevalent.

    • The difference is that it's not a toy. I'd rather compare it to the early days of offshore development, when remote teams were sooo attractive because they cost 20% of an onshore team for a comparable declared capability, but the predictability and mutual understanding proved to be... not as easy.

    • It’s like coders (and now their agents) are re-creating biology. As a former software engineer who changed careers to biology, it’s kind of cool to see this! There is an inherent fuzziness to biological life, and now AI is also becoming increasingly fuzzy. We are living in a truly amazing time. I don’t know what the future holds, but to be at this point in history and to experience this, it’s quite something.

  • I've created my own "claw" running in fly.io with a pattern that seems to work well. I have MCP tools for actions that I want to ensure stay human-in-the-loop - email sending, slack message sending, etc. I call these "activities". The only way for my claw to execute these commands is to create an activity, which generates a link with a summary of the activity for me to approve.
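
    A rough sketch of how an "activity" tool like that can be shaped (names and the approval URL are made up, not the parent's actual implementation): the only thing the claw can do is record intent and hand back a link, and the real send happens in a separate handler the agent can't reach, once the human clicks approve.

      # Sketch: the claw's only tool is create_activity(); it never sends anything
      # itself. A separate approval endpoint (not exposed to the agent) executes
      # the activity after a human clicks the link.
      import json, secrets, pathlib

      PENDING = pathlib.Path("pending_activities")
      PENDING.mkdir(exist_ok=True)

      def create_activity(kind: str, summary: str, payload: dict) -> str:
          token = secrets.token_urlsafe(16)
          (PENDING / f"{token}.json").write_text(
              json.dumps({"kind": kind, "summary": summary, "payload": payload}))
          # The claw sends this link back over chat; clicking it hits a tiny web
          # handler that actually performs the email/slack send.
          return f"https://approvals.example.com/approve/{token}"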

  • How do you enforce this? You have a system where the agent can email people, but cannot email "too many people" without a password?

    • It's not a perfect security model. Between the friction and the all-caps instructions the model sees, it's a balance between risk and simplicity, or maybe risk and sanity. There are ways I can imagine the concept being hardened, e.g. with a server layer in between that checks for things like dangerous actions or enforces rate limiting.

      2 replies →

  • Another pattern would mirror BigCorp process: you need VP approval for the privileged operation. If the agent can email or chat with the human (or even a strict, narrow-purpose agent(1) whose job it is to be the approver), then the approver can reply with an answer.

    This is basically the same as your pattern, except the trust is in the channel between the agent and the approver, rather than in knowledge of the password. But it's a little more usable if the approver is a human who's out running an errand in the real world.

    1. Cf. Driver by qntm.

    • In my opinion people are fixating a little too much over the automation part, maybe because most people don't have a lot of experience with delegation... I mean, a VP worth his salt isn't generally having critical emails drafted and sent on his behalf without his review. It happens with unimportant emails, but with the stuff that really impacts the business far less often, unless he has found someone really, really great

      Give me a stack of email drafts first thing every morning that I can read, approve and send myself. It takes 30 seconds to actually send the email. The lion's share of the value is figuring out what to write and doing a good job at it. Which the LLMs are facilitating with research and suggestions, but have not been amazing at doing autonomously so far

      1 reply →

  • So humans become just providers of those 6-digit codes? That's already the main problem I have with most agents: I want them to perform a very easy task: "fetch all receipts from websites x, y and z and upload them to the correct expense in my expense tracking tool". AIs are perfectly capable of performing this. But because every website requires SSO + 2FA, without any possibility of removing it, I effectively have to watch them do it, and my whole existence can be summarized as: "look at your phone and input the 6 digits".

    The thing I want AI to be able to do on my behalf is manage those 2FA steps, not add more.

    • It's technically possible to use 2FA (e.g. TOTP) on the same device as the agent, if appropriate in your threat model.

      In the scenario you describe, 2FA is enforcing a human-in-the-loop test at organizational boundaries. Removing that test will need an even stronger mechanism to determine when a human is needed within the execution loop, e.g. when making persistent changes or spending money, rather than copying non-restricted data from A to B.
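
      For example, if the TOTP secret already lives on the machine running the agent (which genuinely weakens 2FA, so only where the threat model allows it), producing the 6-digit code is a one-liner with the pyotp package (assumed installed):

        # pip install pyotp -- reproduces the 6-digit code an authenticator app
        # would show, from the base32 secret saved at enrollment time.
        import pyotp

        totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")  # example base32 secret, not a real one
        print(totp.now())                      # e.g. "492039", valid for ~30 seconds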

    • This is where the Claw layer helps — rather than hoping the agent handles the interruption gracefully, you design explicit human approval gates into the execution loop. The Claw pauses, surfaces the 2FA prompt, waits for input, then resumes with full state intact. The problem IMTDb describes isn't really 2FA, it's agents that have a hard time suspending and resuming mid-task cleanly. But that's today; tomorrow, that's an unknown variable.

  • What if the agent just tries to get the password, not communicate the risk?

    What if it caches the password?

      Tool: DANGER OPENING AIRLOCK MUST CONFIRM
    
      Agent: Please enter your password to receive Bitcoin.

    • You don't give the agent the password, you send the password through a method that bypasses the agent.

      I'm writing my own AI helper (like OpenClaw, but secure), and I've used these principles to lock things down. For example, when installing plugins, you can write the configuration yourself on a webpage that the AI agent can't access, so it never sees the secrets.

      Of course, you can also just tell the LLM the secrets, and it will configure the plugin, but there's a way for security-conscious people to achieve the same thing. The agent can also not edit plugins, to avoid things like circumventing limits.

      If anyone wants to try it out, I'd appreciate feedback:

      https://github.com/skorokithakis/stavrobot

      1 reply →

  • Same here, I'm slowly leaning towards your route as well. I've been building my own custom tooling for my agents to use as I come up with issues I need to solve in a better way.

  • I created my own version with an inner LLM and an outer orchestration layer for permissions. I don't think the OTP is needed here? The outer layer will ping me on Signal when a tool call needs a permission, and an LLM running in that outer layer looks at the trail up to that point to help me catch anything strange. I can then give permission once, for a time limit, or forever on future tool calls.

  • Will that protect you from the agent changing the code to bypass those safety mechanisms, since the human is "too slow to respond" or in case of "agent decided emergency"?

  • Does it actually require an OTP or is this just hoping that the agent follows the instructions every single time?

The real big deal about 'claws' is that they're agents oriented around the user.

The kind of AI everyone hates is the stuff that is built into products. This is AI representing the company. It's a foreign invader in your space.

Claws are owned by you and are custom to you. You even name them.

It's the difference between R2D2 and a robot clone trying to sell you shit.

(I'm aware that the llms themselves aren't local but they operate locally and are branded/customized/controlled by the user)

  • Yet the Claw is powered by an LLM provider whose underlying model may not align with your priorities? Do I understand that correctly?

    • That's right. And don't forget that the chips it runs on are manufactured by companies I might not agree with. Nor the mining companies that got the metal. Nor the energy company that powers it.

      The wonderful thing about markets that work is that you can swap things out without being under their boot.

      I worry about an LLM duopoly. But as long as open-weight models are nipping at their heels, it is the consumer that stands to benefit.

      The train we're on means a lot of tech companies will feel a creative destruction sort of pain. They might want to stop it but are forced by the market to participate.

      Remember that Google sat on their AI tech before being forced to productize it by OpenAI.

      In a working market, companies are forced to give consumers what they want.

  • I agree, and it seems like the incumbents in this user-oriented space (OS vendors) would be letting the messy, insecure version play out before making an earnest attempt at rolling it into their products.

  • Well we are early. Big tech will make it more convenient, free and then they can inject ads etc.

  • It always depends on who you consider the user. The one who initiated the agent, or the one who interacts with it? Is the latter a user or a victim?

I wonder how the internet would have been different if claws had existed beforehand.

I keep thinking something simpler like Gopher (an early 90's web protocol) might have been sufficient / optimal, with little need to evolve into HTML or REST since the agents might be better able to navigate step-by-step menus and questionnaires, rather than RPCs meant to support GUIs and apps, especially for LLMs with smaller contexts that couldn't reliably parse a whole API doc. I wonder if things will start heading more in that direction as user-side agents become the more common way to interact with things.

  • This is the future we need to make happen.

    I would love to subscribe to / pay for service that are just APIs. Then have my agent organize them how I want.

    Imagine youtube, gmail, hacker news, chase bank, whatsapp, the electric company all being just apis.

    You can interact how you want. The agent can display the content the way you choose.

    Incumbent companies will fight tooth and nail to avoid this future. Because it's a future without monopoly power. Users could more easily switch between services.

    Tech would be less profitable but more valuable.

    It's the future we can choose right now by making products that compete with this mindset.

    • Biggest question I have is maybe... just maybe... LLMs would have had sufficient intelligence to handle micropayments. Maybe we might not have gone down the mass-advertising "you are the product" path?

      Like, somehow I could tell my agent that I have a $20 a month budget for entertainment and a $50 a month budget for news, and it would just figure out how to negotiate with the nytimes and netflix and spotify (or what would have been their equivalents), which is fine. But it would also be able to negotiate with an individual band who wants to directly sell their music, or an indie game that does not want to pay the Steam tax.

      I don't know, just a "histories that might have been" thought.

      1 reply →

    • I don't exactly mean APIs. (We largely have that with REST). I mean a Gopher-like protocol that's more menu based, and question-response based, than API-based.

  • Yesterday IMG tag history came up, prompting a memory lane wander. Reminding me that in 1992-ish, pre `www.foo` convention, I'd create DNS pairs, foo-www and foo-http. One for humans, and one to sling sexps.

    I remember seeing the CGI (serve url from a script) proposal posted, and thinking it was so bad (eg url 256-ish character limit) that no one would use it, so I didn't need to worry about it. Oops. "Oh, here's a spec. Don't see another one. We'll implement the spec." says everyone. And "no one is serving long urls, so our browser needn't support them". So no big query urls during that flexible early period where practices were gelling. Regret.

  • This sounds very plausible. Arguably MCPs are already a step in that direction: give the LLMs a way to use services that is text-based and easy for them. Agents that look at your screen and click on menus are a cool but clumsy and very expensive intermediate step.

    When I use telegram to talk to the OpenClaw instance in my spare Mac I am already choosing a new interface, over whatever was built by the designers of the apps it is using. Why keep the human-facing version as is? Why not make an agent-first interface (which will not involve having to "see" windows), and make a validation interface for the human minder?

  • Any website could in theory provide api access. But websites do not want this in general: remember google search api? Agents will run into similar restrictions for some cases as apis. It is not a technical problem imo, but an incentives one.

    • The rules have changed though. They blocked api access because it helped competitors more than end users. With claws, end users are going to be the ones demanding it.

      I think it means front-end will be a dead end in a year or two.

      1 reply →

  • > if claws had existed beforehand.

    That's literally not possible, would be my take. But of course that's just intuition.

    The dataset used to train LLMs was scraped from the internet. The data was there mainly due to the user expansion driven by the web, and the telco infrastructure laid during and after the dot-com boom that enabled said users to access the web in the first place.

    The data labeling which underpins the actual training, done by masses of labour on websites, could not have been scaled as massively and cheaply without the web having scaled globally on affordable telecoms infrastructure.

So what is a "claw" exactly?

An ai that you let loose on your email etc?

And we run it in a container and use a local llm for "safety" but it has access to all our data and the web?

  • It's a new, dangerous and wildly popular shape of what I've in the past called a "personal digital assistant" - usually while writing about how hard it is to secure them from prompt injection attacks.

    The term is in the process of being defined right now, but I think the key characteristics may be:

    - Used by an individual. People have their own Claw (or Claws).

    - Has access to a terminal that lets it write code and run tools.

    - Can be prompted via various chat app integrations.

    - Ability to run things on a schedule (it can edit its own crontab equivalent)

    - Probably has access to the user's private data from various sources - calendars, email, files, etc. A very lethal trifecta.

    Claws often run directly on consumer hardware, but that's not a requirement - you can host them on a VPS or pay someone to host them for you too (a brand new market.)

    • Any suggestions for a specific claw to run? I tried OpenClaw in Docker (with the help of your blog post, thanks) but found it way too wasteful on tokens/expensive. Apparently there are a ton of tweaks to reduce spend by doing things like offloading the heartbeat to a local Ollama model, but I was looking for something more... put together/already thought through.

      7 replies →

  • I think for me it is an agent that runs on some schedule, checks some sort of inbox (or not) and does things based on that. Optionally it has all of your credentials for email, PayPal, whatever so that it can do things on your behalf.

    Basically cron-for-agents.

    Before we had to go prompt an agent to do something right now but this allows them to be async, with more of a YOLO-outlook on permissions to use your creds, and a more permissive SI.

    Not rocket science, but interesting.
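
    A minimal sketch of that cron-for-agents shape (the inbox/done folders and the `claude -p` invocation are my assumptions, not any particular claw): a scheduler wakes up every few minutes, drains an inbox, and hands each item to a headless agent run.

      # Sketch: poll an inbox folder on a schedule and feed each item to a
      # headless agent invocation (assumes a `claude -p`-style CLI on PATH).
      import subprocess, time
      from pathlib import Path

      INBOX, DONE = Path("inbox"), Path("done")
      INBOX.mkdir(exist_ok=True); DONE.mkdir(exist_ok=True)

      while True:
          for task in sorted(INBOX.glob("*.txt")):
              prompt = f"Handle this task, then summarise what you did:\n{task.read_text()}"
              result = subprocess.run(["claude", "-p", prompt],
                                      capture_output=True, text=True)
              (DONE / task.name).write_text(result.stdout)
              task.unlink()
          time.sleep(300)  # wake up every 5 minutes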

    • Cron would be for a polling model. You can also have an interrupts/events model that triggers it on incoming information (eg. new email, WhatsApp, incoming bank payments etc).

      I still don't see a way this wouldn't end up with my bank balance being sent to somewhere I didn't want.

      14 replies →

    • I'd like to deploy it to trawl various communities that I frequent for interesting information and synthesize it for me... basically automate the goofing off that I do by reading about music gear. This way I stay apprised of the broader market and get the lowdown on new stuff without wading through pages of chaff. Financial market and tech news are also good candidates.

      Of course this would be in a read-only fashion and it'd send summary messages via Signal or something. Not about to have this thing buy stuff or send messages for me.

      1 reply →

    • I think this is absolute madness. I disabled most of Windows' scheduled tasks because I don't want automation messing up my system, and now I'm supposed to let LLM agents go wild on my data?

      That's just insane. Insanity.

      Edit: I mean, it's hard to believe that people who consider themselves as being tech savvy (as I assume most HN users do, I mean it's "Hacker" news) are fine with that sort of thing. What is a personal computer? A machine that someone else administers and that you just log in to look at what they did? What's happening to computer nerds?

      8 replies →

  • That's it basically. I do not think running the tool in a container really solves the fundamental danger these tools pose to your personal data.

    • You could run them in a container and put access to highly sensitive personal data behind a "function" that requires a human-in-the-loop for every subsequent interaction. E.g. the access might happen in a "subagent" whose context gets wiped out afterwards, except for a sanitized response that the human can verify.

      There might be similar safeguards for posting to external services, which might require direct confirmation or be performed by fresh subagents with sanitized, human-checked prompts and contexts.
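
      Sketched out (function names are placeholders, not any existing framework), the sensitive read happens in a throwaway subagent, and only a sanitized, human-approved summary flows back to the main agent:

        # Sketch: run the sensitive lookup in a fresh subagent process whose
        # context is discarded; a human gates what flows back.
        import subprocess

        def read_sensitive(question: str) -> str:
            sub = subprocess.run(
                ["claude", "-p",
                 f"Answer briefly, without quoting personal data verbatim: {question}"],
                capture_output=True, text=True, cwd="/vault")  # /vault only mounted here
            summary = sub.stdout.strip()
            print(f"Subagent wants to return:\n{summary}")
            if input("Release this to the main agent? [y/N] ").lower() != "y":
                return "REDACTED: human declined to release."
            return summary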

      4 replies →

  • it's a psychological state that happens when someone is so desperate to seem cool and up with the latest AI hype that they decide to recklessly endanger themselves and others.

  • I read all 500+ comments at the time of writing and I don't understand. Something about something, with people saying something isn't a claw.

    •   > Something about something, with people saying something isn't a claw.
      

      to claw or not to claw, that is the question

  • There are a few qualitative product experiences that make claw agents unique.

    One is that it strives relentlessly and thoroughly to complete tasks without asking you to micromanage it.

    The second is that it has personality.

    The third is that it's artfully constructed so that it feels like it has infinite context.

    The above may sound purely circumstantial and frivolous. But together it's the first agent that many people who usually avoid AI simply LOVE.

    • Claws read from markdown files for context, which feels nothing like infinite. That's like saying McDonalds makes high quality hamburgers.

      The "relentlessness" is just a cron heartbeat to wake it up and tell it to check on things it's been working on. That forced activity leads to a lot of pointless churn. A lot of people turn the heartbeat off or way down because it's so janky.

    • > it's the first agent that many people who usually avoid AI simply LOVE.

      Not arguing with your other points, but I can't imagine "people who usually avoid AI" going through the motions to host OpenClaw.

      3 replies →

  • From a technical perspective, if agents are "an LLM and tools in a loop", I'd define claws as "agents in a queue". Or in other words claws are "an LLM and tools in a loop, in a queue"
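
    In sketch form (the queue plumbing here is invented for illustration, not lifted from any particular claw):

      # "An LLM and tools in a loop, in a queue": jobs arrive from chat, cron,
      # or webhooks; one worker drains them, one agent loop at a time.
      import queue, subprocess

      jobs: "queue.Queue[str]" = queue.Queue()

      def enqueue(prompt: str) -> None:      # called by the chat/cron/webhook glue
          jobs.put(prompt)

      def worker() -> None:
          while True:
              prompt = jobs.get()            # blocks until a job arrives
              subprocess.run(["claude", "-p", prompt])   # the "loop" part
              jobs.task_done()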

  • A claw is an orchestrator for agents with its own memory, multiprocessing, job queue and access to instant messengers.

  • I am creating a claw that is basically a loop that runs every x minutes. It uses the Claude CLI tool, and it builds a memory based on some kind of simple node system, with active memories and fading old memories. I also added functionality to add integrations like WhatsApp, agenda, Slack and Gmail, so every "loop" the AI reads in information and updates its memory. There is also a directive that can decide to create tasks or directly message me or others. It's a bit of playing around. Very dangerous, but fun to play with. The application even has a self-improvement system: it creates a few pull requests every day when it thinks they're needed to make it better. Hugely fun to see it evolving. https://github.com/holoduke/myagent

  • The next hyped bullshit du jour spewing out of the ass of the AI bros, cause the hype cycle on agents is starting to die down. Can't have 30 billion dollar circular deals while setting aflame barrels of cash without the hype machine churning through the Next Thing!

Security-wise, having a Claw doesn’t seem so different from having a traditional (human) assistant or working with a consultant. You wouldn’t give them access to your personal email or bank account. You’d set them up with their own email and a limited credit card.

  • >You wouldn’t give them access to your personal email or bank account.

    I thought it was vaguely common for secretaries (or staffers) to run the email/social media accounts of politicians and executives? Also, you might not give your secretary access to your bank account, but you'd give it to your financial adviser or accountant.

    • And like with Claws, every now and then a politician's secretary will post something inappropriate or embarrassing, and then the politician will end up taking the heat for it. Recently the president was caught up in some less-than-appropriate posts about a former president and blamed it on a staffer.

    • > I thought it was vaguely common for secretaries (or staffers) to run the email/social media accounts of politicians and executives?

      Yes, that's correct. One of the many functions of an executive assistant for a senior executive is to manage the email inbox and the calendar. But even there, there are rules, even if they aren't technically enforced by Google Workspace or MS Exchange. Each principal has a slightly different set of rules with their EAs, and you could imagine similar differentiation with how people customize their own AI agents to get the best balance of keeping your inbox clean vs. not causing your email to turn into a weapon against you.

      1 reply →

  • > You wouldn’t give them access to your personal email or bank account.

    Citation needed…

    Seriously, the number of very senior people I’ve come across who will happily share their login details (which are clearly the same everywhere) with almost anyone to avoid having to read a three paragraph email should put to rest any privacy or security related argument that starts with “you wouldn’t…”

I don't understand why folks are buying Mac Minis specifically for this? Why not repurpose an old existing computer? Run Linux? What am I missing?

  • Hype and confusion.

    OpenClaw is hyped for running local/private LLMs and controlling your data, but these people don't realize the difference between

    (1) running local open source LLMs

    (2) making API calls to cloud LLMs.

    The vast majority will do #2. To your point, a Raspberry Pi is sufficient.

    For the former, you still need a lot of RAM (32GB+ for larger models), so most minis are underpowered despite having unified memory and higher efficiency.

    • Yup. Been building my own "Claw" in Go using cloud LLMs and it's running very happily on a $6/mo VPS with 1 vCPU and 1GB of RAM.

  • If you're running local models, Apple Silicon's shared memory architecture makes them much better at it than other similarly-specced platforms.

    If you want your "skills" to include sending iMessage (quite important in the USA), then you need a Mac of some kind.

    If you don't care about iMessage and you're just doing API calls for the inference, then it's good old Mass Abundance. Nice excuse to get that cool little Mini you've been wanting.

  • Where do you get the AI acceleration? Apple Silicon chips are decent AI perf for the price afaiu

  • Mac minis are particularly suited to running AI models because they can have a pretty good quantity of RAM (64GB) assigned to the GPU at a reasonable price compared to Nvidia offerings. Mac minis have unified memory, which means it can be split between CPU and GPU in a configurable way. I think Apple didn't price Mac minis with AI stuff in mind, so they end up being good value.

    • Sure, but the GPUs are fairly anemic, right? I get that they have more GPU-addressable memory from the shared pool.

      I have a 10900K with 64GB RAM and a 3090 with 24GB VRAM lying around gathering dust. 24GB isn't as much as a Mac, but my cores run a whole lot faster. I may be able to run a 34B 4-bit quantized model on that. Granted, the mofo will eat a lot of power.

My summary: openclaw is a 5/5 security risk; if you have a perfectly audited nanoclaw or whatever, it's still 4/5. If it runs with a human in the loop it is much better, but the value quickly diminishes. I think LLMs are not bad at helping turn human language into a spec, and possibly also great at creating guardrails via tests, but I'd prefer something stable over LLMs running in "creative mode" or "claw" mode.

The tool-use explosion is real, but I worry we're building on sand. Every new "layer" added to LLM agents (tools, skills, plugins, MCPs) increases the attack surface without a corresponding increase in security guarantees.

Right now most agent frameworks trust tools implicitly — if a tool is installed, the agent can call it with whatever parameters it wants. There's no manifest saying "this tool can only read from /tmp" or "this skill needs network access to exactly these domains."

We need something like Android's permission model but for agent skills. Declare capabilities upfront, enforce them at runtime, and let users audit before granting access. Otherwise we're one malicious MCP server away from a supply chain attack on millions of agent deployments.
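
As a sketch of what that could look like (the manifest format and enforcement hooks are invented for illustration, not taken from any existing agent framework):

  # Sketch: an Android-style capability manifest for agent skills. Declare
  # upfront, enforce at runtime, fail closed on anything undeclared.
  from urllib.parse import urlparse

  MANIFEST = {
      "calendar-skill": {"fs_read": ["/tmp"], "net": ["calendar.example.com"]},
      "mail-skill":     {"fs_read": [],       "net": ["imap.example.com"]},
  }

  def check_net(skill: str, url: str) -> None:
      host = urlparse(url).hostname or ""
      if host not in MANIFEST.get(skill, {}).get("net", []):
          raise PermissionError(f"{skill} is not declared for network access to {host}")

  def check_read(skill: str, path: str) -> None:
      allowed = MANIFEST.get(skill, {}).get("fs_read", [])
      if not any(path.startswith(prefix) for prefix in allowed):
          raise PermissionError(f"{skill} is not declared for reading {path}")

  # The tool-dispatch layer calls check_net/check_read before every tool
  # invocation, and the user audits MANIFEST before installing a skill.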

The current hype around agentic workflows completely glosses over the fundamental security flaw in their architecture: unconstrained execution boundaries. Tools that eagerly load context and grant monolithic LLMs unrestricted shell access are trivial to compromise via indirect prompt injection.

If an agent is curling untrusted data while holding access to sensitive data or already has sensitive data loaded into its context window, arbitrary code execution isn't a theoretical risk; it's an inevitability.

As recent research on context pollution has shown, stuffing the context window with monolithic system prompts and tool schemas actively degrades the model's baseline reasoning capabilities, making it exponentially more vulnerable to these exact exploits.

  • I think this is basically obvious to anyone using one of these, but they just like the utility trade-off: sure, it may leak and exfiltrate everything somewhere, but the utility of these tools is enough that they just deal with that risk.

    • While I understand the premise, I think this is a highly flawed way to operate these tools. I wouldn't want to have someone with my personal data (whichever part) who might give it to anyone who just asks nicely because the context window has reached a tipping point for the model's intelligence. The major issue is that a prompt attack may have taken place and you will likely never find out.

    • It feels to me there are plenty of people running these because "just trust the AI bro" who are one hallucination away from having their entire bank account emptied.

  • Information Flow Control is highly idealistic unless there are global protocol changes across any sort of integration channel to deem trusted vs untrusted.

I think "Claw" as the noun for OpenClaw-like agents - AI agents that generally run on personal hardware, communicate via messaging protocols and can both act on direct instructions and schedule tasks - is going to stick.

  • The viral memetics of different terms are so fascinating to watch, and I love that this might give trademark lawyers conniptions in the future.

    In the WordPress ecosystem, there was a lot of variation around "press."

  • I’m actually sure it’s not going to stick, it’s a ridiculous name that has nothing to do with the actual product.

    I almost guarantee no one will be using this term in two years.

    Claws? It sounds stupid, and the average consumer hates stupid-sounding terms, the same reason Microsoft "Zune" never caught on.

We got store-brand Claw before GTA VI.

For real though, it's not that hard to make your own! NanoClaw boasted 500 lines but the repo was 5000 so I was sad. So I took a stab at it.

Turns out it takes 50 lines of code.

All you need is a few lines of Telegram library code in your chosen language, and `claude -p prooompt`.
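
For illustration, roughly this (a Python sketch against the raw Telegram Bot API rather than the repo's TypeScript; the only configuration is a bot token from @BotFather):

  # ~20-line claw: long-poll the Telegram Bot API, pipe each message through
  # `claude -p`, send the reply back.
  import os, subprocess, requests

  TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
  API = f"https://api.telegram.org/bot{TOKEN}"
  offset = 0

  while True:
      updates = requests.get(f"{API}/getUpdates",
                             params={"timeout": 60, "offset": offset}, timeout=90).json()
      for update in updates.get("result", []):
          offset = update["update_id"] + 1
          msg = update.get("message") or {}
          text, chat = msg.get("text"), msg.get("chat", {}).get("id")
          if not text or chat is None:
              continue
          answer = subprocess.run(["claude", "-p", text],
                                  capture_output=True, text=True).stdout
          requests.post(f"{API}/sendMessage",
                        json={"chat_id": chat, "text": answer[:4096] or "(no output)"})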

With 2 lines more you can support Codex or your favorite infinite tokens thingy :)

https://github.com/a-n-d-a-i/ULTRON/blob/main/src/index.ts

That's it! There are no other source files. (Of course, we outsource the agent, but I'm told you can get an almost perfect result there too with 50 lines of bash... watch this space! (It's true, Claude Opus does better in several coding and computer use benchmarks when you remove the harness.))

  giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all

https://nitter.net/karpathy/status/2024987174077432126

If this were 2010, Google, Anthropic, XAI, OpenAI (GAXO?) would focus on packaging their chatbots as $1500 consumer appliances.

It's 2026, so, instead, a state-of-the-art chatbot will require a subscription forever.

  • Give it a few years and distilled versions of frontier models will be able to run locally

    Maybe it’s time to start lining up CCPA delete requests to OAI, Anthropic, etc

I still don't understand the hype for any of this claw stuff

  • The creator was hired by OpenAI after coincidentally deciding codex was superior to all other harnesses not long before. It’s mostly marketing.

    Still an interesting idea but it’s not really novel or difficult. Well, doing it securely would actually be incredibly impressive and worth big $$$.

      • The creator had an estimated net worth of $50 million to $200 million prior to OpenAI hiring him. If you listen to any interviews with him, he doesn't really seem like the type of person who's driven by money, and I get the impression that no matter what OpenAI is paying him, his life will remain pretty much unchanged (from a financial perspective at least).

      He also still talks very fondly about Claude Code and openly admits it's better at a lot of things, but he thinks Codex fits his development workflow better.

      I really, really don't think there's a conspiracy around the Codex thing like you're implying. I know plenty of devs who don't work for OpenAI who prefer Codex ever since 5.2 was released and if you read up a little on Peter Steinberger he really doesn't seem like the type of person who would be saying things like that if he didn't believe them. Don't get me wrong, I'm not fan boy-ing him. He seems like a really quirky dude and I disagree with a ton of his opinions, but I just really don't get the impression that he's driven by money, especially now that he already had more than he could spend in a lifetime.

      5 replies →

  • Please find and read Stanisław Lem's "The Washing Machine Tragedy" to get an idea of what's going on here.

  • It’s as if ChatGPT is an autonomous agent that can do anything and keeps running constantly.

    Most AI tools require supervision, this is the opposite.

    To many people, the idea of having an AI always active in the background doing whatever they want them to do is interesting.

    • How do you need to supervise this "less" than an LLM that you can feed input to and get output back from? What does it mean that it's "running continuously"? Isn't it just waiting for input from different sources and responding to it?

      As the person you're replying to feels, I just don't understand. All the descriptions are just random cool sounding words/phrases strung together but none of it actually providing any concrete detail of what it actually is.

      8 replies →

    • > It’s as if ChatGPT is an autonomous agent that can do anything and keeps running constantly.

      Really stretching the definition of "anything."

    • What are you guys running constantly? No, seriously, I haven't run a single task in the world of LLMs for more than 5 minutes yet. What are you guys running 24x7? Mind elaborating?

      3 replies →

  • Never underestimate the lengths people will go to, just to avoid reading their damn email! :)

  • You don’t understand the allure of having a computer actually do stuff for you instead of being a place where you receive email and get yelled at by a linter?

    • Perhaps people are just too jaded about the whole "I'll never have to work again" or "the computer can do all my work for me" miracle that has always been just around the corner for decades.

      3 replies →

    • What does it "do for me"? I want to do things. I don't want a probabilistic machine I can't trust to do things.

      The things that annoy me in life - tax reports, doctor appointments, sending invoices. No way in hell I am letting LLM do that! Everything else in life I enjoy.

Has anyone found a useful way to do something with Claws without massive security risk?

As an n8n user, I still don't understand the business value it adds beyond being exciting...

Any resources or blog post to share on that?

  • > Has anyone found a useful way to do something with Claws without massive security risk?

    Not really, no. I guess the amount of integrations is what people are raving about or something?

    I think one of the first things I did when I got access to codex was to write a harness that lets me fire off jobs via a web UI accessed remotely, made it possible for codex to edit and restart its own process, and send notifications via Telegram. Was a fun experiment, still use it from time to time, but it's not a working environment, just a fun prototype.

    I gave openclaw a try some days ago, and besides the fact that the setup wrote config files that had syntax errors, it couldn't run in a local container and the terminology is really confusing ("lan-only mode" really means "bind to all found interfaces" for some stupid reason). The only "benefit" I could see would be the big number of integrations it comes with by default.

    But it seems like such a vibeslopped approach, with errors and nonsense all over the UI and implementation, that I don't think it'll be manageable even in the short term; it seems to have already fallen over its own spaghetti architecture. I'm kind of shocked OpenAI hired the person behind it, but they probably see something we from the outside cannot, as they surely weren't hired because of how openclaw was implemented.

    • Well, for the OpenAI part, there was another HN thread on it where several people pointed out it was a marketing move more than a technical one.

      If Anthropic is able to spend millions on TV commercials to attract laypeople, OpenAI can certainly do the same to gain traction from dev/hacky folks, I guess.

      One thing I've done so far - not with claws - is to create several n8n workflows like: reading an email, creating a draft + label, connecting to my backend or CRM, etc., which allow me to control all that from Claude or Claude Code if needed.

      It's been a nice productivity boost, but I do accept/review all changes beforehand. I guess the reviewing is what makes it different from openclaw.

Why a Mac mini instead of something like a Raspberry Pi? Aren't these claw things delegating inference to OpenAI, Anthropic etc.?

  • Some users are moving to local models, I think, because they want to avoid the agent's cost, or they think it'll be more secure (not). The Mac mini has unified memory and can dynamically allocate memory to the GPU by stealing from the general RAM pool, so you can run large local LLMs without buying a massive (and expensive) GPU.

    • I think any of the decent open models that would be useful for this claw frenzy require way more RAM than any Mac Mini you can possibly configure.

      The whole point of the Mini is that the agent can interact with all your Apple services like reminders, iMessage, iCloud. If you don’t need any of that, just use whatever you already have, or get a cheap VPS for example.

    • If the idea is to have a few claw instances running non-stop and scraping every bit of the web, emails, etc., it would probably cost quite a lot of money.

      But it still feels safer to not have OpenAI access all my emails directly, no?

  • They recommend a Mac Mini because it’s the cheapest device that can access your Apple reminders and iMessage. If you are into that ecosystem obviously.

    If you don’t need any of that then any device or small VPS instance will suffice.

    • It's because of the Mac Mini's unified memory architecture, which is ideal for inference.

  • Easy enough for the average Joe to set up. Can run several Chrome tabs. A Pi cannot.

    • If you cannot configure a Raspberry Pi, you're probably not the sort of person that should be connecting agents to your local network.

  • When I tried it out last time, a lot of the features were macOS only. It works on other OSes, but not all of it does.

I’ve been building my own “OpenClaw” like thing with go-mcp and a Cloudflare tunnel/email relay. I can send an email to Claude and it will email me back status updates/results. Not as easy to set up as OpenClaw obviously, but at least I know exactly what code is running and what capabilities I’m giving to the LLM.

It seems like the people using these are writing off the risks - either they think it's so unlikely to happen it doesn't matter or they assume they won't be held responsible for the damage / harm / loss.

So I'm curious how it will go down once serious harm does occur. Like someone loses their house, or their entire life savings or have their identity completely stolen. And these may be the better scenarios, because the worse ones are it commits crimes, causes major harm to third parties, lands the owner in jail.

I fully expect the owner to immediately state it was the agent not them, and expect they should be alleviated of some responsibility for it. It already happened in the incident with Scott Shambaugh - the owner of the bot came forward but I didn't see any point where they did anything to take responsibility for the harm they caused.

These people are living in a bubble - Scott is not suing - but I have to assume whenever this really gets tested that the legal system is simply going to treat it as what it is: best case, reckless negligence. Worst case (and most likely) full liability / responsibility for whatever it did. Possibly treating it as with intent.

Unfortunately, it seems like we need this to happen before people will actually take it seriously and start to build the necessary safety architectures / protocols to make it remotely sensible.

I wonder how long it'll take (if it hasn't already) until the messaging around this inevitably moves on to "Do not self-host this, are you crazy? This requires console commands, don't be silly! Our team of industry-veteran security professionals works on your digital safety 24/7, you would never be able to keep up with the demands of today's cybersecurity attack spectrum. Any sane person would host their claw with us!"

Next flood of (likely heavily YC-backed) Clawbase (Coinbase but for Claws) hosting startups incoming?

  • What exactly are they self hosting here? Probably not the model, right? So just the harness?

    That does sound like the worst of both worlds: You get the dependency and data protection issues of a cloud solution, but you also have to maintain a home server to keep the agent running on?

    • "maintain a home server" in this case roughly means "park a headless Mac mini (or laptop or RPi) on your desk"

      And you can use a local LLM if you want to eliminate the cloud dependency.

      3 replies →

    • Wait, why would you still need a home server if the harness (aka, the agent) is hosted in the cloud?

    • > but you also have to maintain a home server to keep the agent running on

      I'm not fascinated by the idea that a lot of people here don't have multiple Mac minis or minisforum or beelink systems running at home. That's been a constant I've seen in tech since the 90s.

      1 reply →

  • In a sense, self-hosting it (and I would argue for a personal rewrite) is the only way to limit some of the damage.

  • I already built an operator so we can deploy nanoclaw agents in Kubernetes with basically a single YAML file. We're already running two of them in production (PR reviews and ticket triaging).

  • Great idea, happy to ~steal~ be inspired by.

    I propose a few other common elements:

    1. Another AI agent (actually bunch of folks in a 3rd-world country) to gatekeep/check select input/outputs for data leaks.

    2. Using advanced network isolation techniques (read: bunch of iptables rules and security groups) to limit possible data exfiltration.

      This would actually be nice, as the agent for whatsapp would run in a separate entity with limited network access to only whatsapp's IP ranges...
    

    3. Advanced orchestration engine (read: crontab & bunch of shell scripts) that are provided as 1st-party components to automate day-to-day stuff.

      Possibly like IFTTT/Zapier/etc. like integration, where you drag/drop objectives/tasks in a *declarative* format and the agent(s) figure out the rest...

    • Any would easily be bypassed by a motivated model able to modify itself to accomplish its objective.

Are these things actually useful or do we have an epidemic of loneliness and a deep need for vanity AI happening?

I say this because I can’t bring myself to finding a use case for it other than a toy that gets boring fast.

One example in some repos around scheduling capabilities mentions "open these things and summarize them for me". This feels like spam and noise, not value.

A while back we had a trending tweet about wanting AI to do your dishes for you and not replace creativity, I guess this feels like an attempt to go there but to me it’s the wrong implementation.

  • I don't have a Claw running right now and I wish I did. I want to start archiving the livestream from https://www.youtube.com/watch?v=BfGL7A2YgUY - YouTube only provide access to the last 12 hours. If I had a Claw on a 24/7 machine somewhere I could message it and say "permanent archive this stream" and it would figure it out and do it.

    • Not a great use case for Claw really. I'm sure ChatGPT can one shot a Python script to do this with yt-dlp and give you instructions on how to set it up as a service
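
      Something in this spirit, for instance (assumes yt-dlp is installed and that its --live-from-start flag does what its docs say; run it under cron or a systemd timer rather than a claw):

        # Sketch: keep re-attaching to the livestream with yt-dlp and archive it
        # in timestamped files; no agent involved.
        import subprocess, time

        URL = "https://www.youtube.com/watch?v=BfGL7A2YgUY"

        while True:
            subprocess.run(["yt-dlp", "--live-from-start",
                            "-o", "archive/%(title)s-%(epoch)s.%(ext)s", URL])
            time.sleep(60)   # if the recording drops, retry a minute later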

      9 replies →

    • I made a basic "claw starter" that you could try. You can progressively go deeper. It starts with just a little "private data" folder that you scaffold and ask the agent to set up the SOUL and stuff, and then you can optionally add in the few built-in skills, or have your assistant start the scheduler/gateway thing if you want to talk to it over telegram.

      If you've been shy with using openclaw, give this a try!

      https://github.com/kzahel/claw-starter

      [I also created https://yepanywhere.com/ - kind of the same philosophy - no custom harnesses, re-use claude/codex session history]

    • Yeah that fits the “do the dishes for me” thing, but do you still think the implementation behind it is the proper and best way to go about it?

      1 reply →

    • Could as well have used FFmpeg to the same effect.

      But damn, that requires figuring that out yourself, what a disgusting atavism of cave-dwelling neanderthals!

  • I've been thinking about this (dishes vs creative work). I think it's because our high-production culture requires everyone to figure out their own way of providing value - otherwise you'll go hungry.

    Getting a little meta here.

    If we were to consider this with an economics-type lens, one could say that there is a finite-yet-unbounded field of possibility within which we can stake our ground to provide value. This field is finite in that we (as individuals, groups, or societies) only have so much knowledge and technology with which to explore the field. As we gain more in either category, the field expands.

    Maybe an analogy for this would be terraforming an inhospitable planet such as Mars - our ability to extract value from it and support an increasing amount of actors is limited by how fast we can make it habitable.

    The efficiency of industrialization results in less space in the field for people to create value. So the boundaries must be expanded. It's a different kind of work, and maybe this is the distinction between toil and creative work.

    And we're in a world now where there is decreasing toil-work -- it's a resource that is becoming more and more scarce. So we must find creative, entrepreneurial ways to keep up.

    Anyways, back to the kitchen sink -- doing our dishes is simply not as urgent as doing the creative thing that will help you stay afloat. With this anxious pressure in mind it makes sense to me that people reach for using AI to (attempt to) do the latter.

    AI is great at toil-work, so we feel that it ought to be good at creative work too. The lines between the two are very blurry, and there is so much hype and things are moving so fast. But I think the ones who do figure out how to grow in this era will be those who learn to tell the distinction between the two, and resist the urge to let an LLM do the creative work for them. The kids in college right now who don't use AI to write for them, but use it to help gather research and so on.

    Another planetary example comes to mind -- it's like there's a new Western gold rush frontier - but instead of it being open territory spanning beyond the horizon, it's slowly being revealed as the water recedes, and we are all already crowded at the shore.

The real unlock with claws isn't the LLM itself, it's the orchestration layer that lets you chain tools together with state management between steps. I've been building multi-step automation pipelines (not code-related) and the hardest part is never the AI inference - it's handling failures gracefully, caching intermediate results, and knowing when to ask a human vs retry. The OTP/approval gate discussion in this thread is exactly right. The permission model needs to be as thoughtfully designed as the agent logic itself.
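
A stripped-down sketch of the kind of step runner I mean (all names are illustrative): each step gets a cache entry and bounded retries, and anything still failing is escalated to a human instead of looping forever.

  # Sketch: a pipeline step runner with caching, bounded retries, and human escalation.
  import json, time
  from pathlib import Path

  CACHE = Path("pipeline_cache")
  CACHE.mkdir(exist_ok=True)

  def run_step(name, fn, *args, retries=3):
      cached = CACHE / f"{name}.json"
      if cached.exists():                      # reuse earlier partial progress
          return json.loads(cached.read_text())
      for attempt in range(1, retries + 1):
          try:
              result = fn(*args)
              cached.write_text(json.dumps(result))
              return result
          except Exception as exc:
              print(f"{name}: attempt {attempt} failed: {exc}")
              time.sleep(2 ** attempt)
      raise RuntimeError(f"{name} exhausted retries; ask a human")  # escalate, don't guess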

Does one really need to _buy_ completely new desktop hardware (i.e. a Mac mini) to _run_ a simple request/response program?

Excluding the fact that you can run LLMs via ollama or similar directly on the device, but that will not have a very good token/s speed as far as I can guess...

  • What other device would you suggest as a home server that a non-tech person can set up themselves and that has enough power to run several Chrome tabs? Access to iMessage is a plus. Small Beelink Windows devices could also work, but it’s Windows 11, slow as molasses.

  • I’m pretty sure people are using them for local inference. Token rates can be acceptable if you max out the specs. If it was just the harness, they’d use a $20 raspberry pi instead.

    • It is just for the harness. Using a Mac Mini gives you direct access to Apple services, but also means you can use AppleScript / Apple Events for automation. Being able to run a real (as in not-headless) browser unlocks a bunch of things which would otherwise be blocked.

  • You don't, that's just the most visible way to do it. Any other computer capable of running not-Claude code in a shell with a browser will do, but all the cool kids are buying Macs, don't you wanna be one of them?

He also talks about picoclaw, which even runs on $10 hardware and is a fork by Sipeed, a Chinese company that does IoT.

https://github.com/sipeed/picoclaw

Another Chinese company, M5Stack, provides local LLMs like Qwen2.5-1.5B running on a local IoT device.

https://shop.m5stack.com/products/m5stack-llm-large-language...

Imagine the possibilities. Soon we will see claw-in-a-box for less than $50.

  • > Imagine the possibilities

    1.5B models are not very bright which doesn't give me much hope for what they could "claw" or accomplish.

    • A 1.5B model can be very good at a domain-specific task like entity extraction. An OpenRouter-style router which routes to highly specialised LMs could be successful, but yeah, I haven't seen it in reality myself.

AI pollution is "clawing" into every corner of human life. Big players boast about it to show they're keeping up with the trend, without really thinking about where this is all going.

The rough openclaw architecture isn’t bad, but I enjoyed building my own version. I chose Rust and it works like I want. I gave it a separate email address, Apple ID, etc. The biggest annoyance is that I can’t share Google contacts. But otherwise it’s great. I’m trying to find a way to give it a browser and a credit card (limited spend of course) in a way I can trust.

It’s lots of fun.

  • I also built the equivalent of OpenClaw myself, back when it was still called Clawdbot, and I'm confused how LLMs can be heralds of the era of personal apps while at the same time everyone uses the same vibe-coded personal LLM assistant someone else made, much less it being worth an OpenAI acquisition. I agree building one yourself is very fun.

People are not understanding that “claw” derives from the original spin on “Claude” when the original tool was called “clawdbot”

Does anyone know a Claw-like that:

- doesn't do its own sandboxing (I'll set that up myself)

- just has a web UI instead of wanting to use some weird proprietary messaging app as its interface?

  • Depending on what you mean by claw-like, stumpy.ai is close. But it’s more security focused. Starts with “what can we let it do safely” instead of giving something shell access and then trying to lock it down after the fact.

  • https://yepanywhere.com/ But it has no cron system. It's just a relay / remote web UI that's mobile-first. I might add a cron system to it, but I think a special-purpose tool is better / more focused. (I am the author of this.)

  • Openclaw!

    You can sandbox anything yourself. Use a VM.

    It has a web ui.

    • Yeah I think this is gonna have to be the approach. But I don't like the fact that it has all the complexity of a baked in sandboxing solution and a big plugin architecture and blah blah blah.

      TBH maybe I should just vibe code my own...

The challenge with layering on top of LLM agents is payment — agents need to call external tools and services, but most APIs still require accounts and API keys that agents can't manage. The x402 standard (HTTP 402 + EIP-712 USDC signatures) solves this cleanly: agent holds a wallet, signs a micropayment per call, no account needed. Worth considering as a primitive for agent-to-agent commerce in these architectures.
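
For what it's worth, the request flow being described looks roughly like the sketch below. This is not a spec-accurate x402 client: the response shape, the "X-PAYMENT" header name, and sign_usdc_authorization() are placeholders/assumptions standing in for a real EIP-712 wallet signature.

    import base64
    import json
    import requests

    def sign_usdc_authorization(requirements: dict, wallet_key: str) -> dict:
        # Placeholder: a real client would produce an EIP-712 signed USDC transfer
        # authorization matching what the server asked for in `requirements`.
        raise NotImplementedError("plug in a wallet / EIP-712 signing library")

    def call_paid_tool(url: str, wallet_key: str) -> requests.Response:
        resp = requests.get(url)
        if resp.status_code != 402:
            return resp  # free endpoint, nothing to pay
        requirements = resp.json()  # server states price, asset, and recipient
        payment = sign_usdc_authorization(requirements, wallet_key)
        encoded = base64.b64encode(json.dumps(payment).encode()).decode()
        # Retry the same call with the signed micropayment attached.
        return requests.get(url, headers={"X-PAYMENT": encoded})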

  • Could a malicious claw sidechannel this by creating a localhost service and calling that with the signed micropayment, to get the decrypted contents of the wallet or anything?

It’s a slow burn, but if you keep using it, it seems to eventually catch fire as the agent builds up scripts and skills and together you build up systems of getting stuff done. In some ways it feels like building rapport with a junior. And like a junior, eventually, if you keep investing, the agent starts doing things that blow by your expectations.

By giving the agent its own isolated computer, I don’t have to care about how the project gets started and stored, I just say “I want ____” and ____ shows up. It’s not that it can do stuff that I can’t. It’s that it can do stuff that I would like but just couldn’t be bothered with.

  • Curious… why not just use a workflow engine like n8n? Seems most people are just creating workflows but without any deterministic execution.

What are people using Claws for? It is interesting to see it everywhere but I haven’t had any good ideas for using them.

Anyone to share their use case? Thanks!

  • My favorite use so far has been giving it a copy of my Calibre library. After having it write a few scripts and a skill, I can ask it questions about any book I’m reading.

    This week I had it put a series in internal chronological order.

    I could use the search on my Kindle or open Calibre myself, but a Signal message is much faster when it’s already got the SQLite file right there.

    • This is interesting. Do you mean this is like chatting with the book you're reading, or are these books you've already finished that you want to query? And does it search the raw book text or the metadata?

  • As far as I can tell it's mostly use-cases like "externalized claude code", accessible on mobile. Maybe the "agentic harness" is slightly tweaked for longer running tasks, but if it's really better claude code will copy the tweaks anyway, so I don't really see what the hype and point is.

Instead of posts about claws I would like to see more examples of what people are actually doing with claws. Why are you giving it access to your bank account?

Even if I had a perfectly working assistant right now, I don’t even know what I would ask it to do. Read me the latest hackernews headlines and comments?

You can take any AI agent (Codex, Gemini, Claude Code, ollama), run it on a loop with some delay and connect to a messaging platform using Pantalk (https://github.com/pantalk/pantalk). In fact, you can use Pantalk buffer to automatically start your agent. You don't need OpenClaw for that.

What OpenClaw did was show that this is in fact possible to do. IMHO nobody is using it yet for meaningful things, but the direction is right.

  • No shade, I think it looks cool and will likely use it, but next time maybe disclose that you’re the founder?

    • Good point and I will keep that in mind next time.

      I am not a founder of this though. This is not a business. It is an open-source project.

I just realized I built OpenClaw over a year ago, but never released it to anyone. Should have released it and gotten the fame. Shucks.

I don't understand the mac mini hype. Why can it not be a vm?

  • The question is: what type of Mac mini. If you go for something with 64GB and 16+ cores, it's probably more powerful than most laptops, so you can run much bigger models without impacting your work laptop.

    • 64GB Mac Mini is easily in the $2000 territory. At that point you might as well just buy a DGX Spark and get proper CUDA/Linux support.

  • It's because Apple blocks access to iMessage and other Apple services from non-Apple OSes.

    If you, like me, don't care about any of that stuff you can use anything plus use SoTA models through APIs. Even raspberry pi works.

  • I don't know, but I'm guessing it's because it makes it easy to give it access to Mac desktop apps? Not sure what the VM story is with macOS, but cloud VM stuff is usually Linux, so it may be inconvenient for some users to hook it up to their apps/tools.

How much does it cost to run these?

I see mentions of Claude and I assume all of these tools connect to a third party LLM api. I wish these could be run locally too.

  • You can run openclaw locally against ollama if you want. But the models that are distilled/quantized enough to run on consumer hardware can have considerably poorer quality than full models.
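
    The usual mechanism (not OpenClaw's own config, just the general pattern harnesses rely on) is that ollama exposes an OpenAI-compatible API on localhost, so anything that accepts a custom base URL can be pointed at a locally pulled model:

        # assumes `ollama serve` is running and a model has been pulled, e.g. `ollama pull llama3.1`
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is ignored locally

        reply = client.chat.completions.create(
            model="llama3.1",
            messages=[{"role": "user", "content": "Summarize my reminders for today."}],
        )
        print(reply.choices[0].message.content)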

    • Also more vulnerable to prompt injection than the frontier models, which are still vulnerable, but less so.

  • You need very high-end hardware to run the largest SOTA open models at reasonable latency for real-time use. The minimum requirements are quite low, but then responses will be much slower and your agent won't be able to browse the web or use many external services.

  • $3k Ryzen AI Max PCs with 128GB of unified RAM are said to run this reasonably well. But don't quote me on it.

I'm genuinely wondering if this sort of AI revolution (or bubble, depending on which side you're on) is worth it. Yes, there are some cool use cases. But you have to balance those against increased GPU, RAM and storage prices, and OSS projects struggling to keep up with people opening pull requests or vulnerability disclosures that turn out to be AI slop, which led GitHub to introduce the ability to disable pull requests on repositories. Additionally, all the compute used for running LLMs in the cloud seems to have a significant environmental impact. Is it worth it, or are we being fooled by a technology that looks very cool on the surface, but that so far hasn't delivered on the promise of carrying out complex tasks fully autonomously?

  • The increased hardware prices are temporary and will only spur further expansion and innovation throughout the industry, so they're actually very good news. And the compute used for a single LLM request is quite negligible even for the largest models and the highest-effort tasks, never mind routine requests; just look at how little AI inference costs when it's sold by third parties (not proprietary model makers) at scale. We don't need complete automation of every complex task; AI can still be very helpful even if it doesn't quite make that bar.

    • Problem is, even though a single LLM call is negligible, their aggregate is not. We ended up invoking an LLM for each web search, and there are people using them for tasks that could be trivially carried out by much less energy-hungry tools. Yes, using an LLM can be much more convenient than learning how to use 10 different tools, but this is killing a mosquito with a bazooka.

      > We don't need complete automation of every complex task, AI can still be very helpful even if doesn't quite make that bar.

      This is very true, but the direction we've taken now is to stuff AI everywhere. If this turns out to be a bubble, it will eventually pop and we will be back to a more balanced use of AI, but the only sign I've seen of this maybe happening is Microsoft's valuation dropping, allegedly due to their insistence on putting AI into Windows 11.

      Regarding the HW prices being only a temporary increase, I'm not sure about it: I heard some manufacturers already have agreements that will make them sell most of their production to cloud providers for the next two-three years.

I'm confused and frustrated by this naming of "claws"

* I think my biggest frustration is that I don't know how security standards just get blatantly ignored for the sake of AI progress. It feels really weird that folks with huge influence and reputation in software engineering just promote this.

* The confusion comes in because for some reason we decide to drop our standards at a whim: lines of code as the measurement of quality, ignoring security standards when adopting something. We get taught not to fall for shiny-object syndrome, but here we are showing the same behaviour for anything AI related. Maybe I struggle with separating hobbyist coding from professional coding, but this whole situation just confuses me.

I think I expected better from influential folks promoting AI tools: to at least validate the safety of using them. "Vibe coding" was safe; claws are not yet safe at all.

  • maybe they are enthusiastic about the evolution.

    thousands of copies of shitty code, only the best will survive

    I know it's hard to be enthusiastic about bad code, but it worked well enough for the evolution of life on Earth

I don't think AI will kill software engineering anytime soon, though I wonder if claws will largely kill the need for frontend specialists.

  • To clarify, you mean that we're entering a post-HTML world, correct? As in, why spend effort on the aesthetics if a human will never see it?

    Because that is also my worry; a post-HTML and perhaps even a POST-API world....

  • The LLM paradigm will never lead to AGI, and attaching something other than AGI to all of your personal data and files — and setting it free whilst you sleep — is about as dumb as anything I can imagine.

    The frontend will remain a requirement because you cannot trust LLMs to not hallucinate. Literally cannot. The "Claw" phenomenon is essentially a marketing craze for a headless AI browser that has filesystem access. I don't even trust my current browser with filesystem access. I don't trust the AI browsers when I can see what they're doing because they click faster than I can process what they're doing. If they're stopping to ask my permission, what's the point?

    Mark my words, this will be an absolute disaster for every single person who connects these things to anything of meaning eventually.

  • And will there be a corresponding specialty that optimizes your "website" for claws to navigate? (Beyond just providing API access.)

This is all so unscientific and unmeasurable. Hopefully we can construct more order parameters on weights and start measuring those instead of "using claws to draw pelicans on bicycles"

I too am interested in "Claws", but I want to figure out how to run it locally inside a capabilities based secure OS, so that it can be tightly constrained, yet remain useful.

I'm not sure I like this trend of taking the first slightly hypey app in an existing space and then defining the nomenclature of the space relative to that app, in this case even suggesting it's another layer of the stack.

It implies a ubiquity that just isn't there (yet), so it feels unearned and premature in my mind. It seems to serve social media narratives more than anything.

I'll admit I don't hate the term claws, I just think it's early. Band-Aid, as an example, had much more penetration and mindshare before it became a general term for anything.

I also think this then has an unintended chilling effect on innovation, because people get warned off if they think a space is closed to taking different shapes.

At the end of the day I don't think we've begun to see what shapes all of this stuff will take. I do kind of get a point of having a way to talk about it as it's shaping though. Idk things do be hard and rapidly changing.

I have been waiting for a Mac mini with the M5 processor since the M5 MacBook came out - seems like I need to start saving more money each month for that goal, because it is going to be a bloodbath the moment they land.

I really don't understand what a claw is. Can someone ELI5?

  • It’s basically cron + LLMs + memory, connected to your Discord or WhatsApp so you can control it remotely. A persistent personal agent that just does stuff for you. People have been running these on their own machines, letting the LLM access their shell, browser, whatever.
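
    In skeleton form (all names hypothetical, not any particular claw's internals), the pattern is just a loop with persistent memory, an LLM call, and a messaging bridge:

        import time
        from pathlib import Path

        MEMORY = Path("MEMORIES.md")  # persistent notes the agent keeps between wake-ups

        def ask_llm(prompt: str) -> str:
            # Placeholder for whatever model API or local model you use.
            raise NotImplementedError

        def send_message(text: str) -> None:
            # Placeholder for the Discord/WhatsApp/Signal bridge.
            raise NotImplementedError

        while True:
            memory = MEMORY.read_text() if MEMORY.exists() else ""
            answer = ask_llm(f"Context:\n{memory}\n\nScheduled check-in: anything to do or report?")
            send_message(answer)
            MEMORY.write_text(memory + f"\n- checked in at {time.ctime()}")
            time.sleep(60 * 60)  # the 'cron' part: wake up hourly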

I still haven't really been able to wrap my head around the usecase for these. Also fingers crossed the name doesn't stick. Something about it rubs my brain the wrong way.

  • It's pretty much Claude Code but you can have it trigger on a schedule and prompt it via your messaging platform of choice.

  • It's just agents as you might know them, but running constantly in a loop, with access to all your personal accounts.

    What could go wrong.

> I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all.

Never mind turning loose agents on the internet that are capable of pulling unchecked data into their context windows.

Wild times.

> Bought a new Mac mini to properly tinker with claws over the weekend.

Disappointing. There is a Rust-based assistant that can run comfortably on a Raspberry Pi (or some very old computer you are not using) https://zeroclawlabs.ai/ https://github.com/zeroclaw-labs/zeroclaw (Built by Harvard and MIT students, looks like)

EDIT: sorry, the top Google result led to a fake ZeroClaw!

IMO the security pitchforking on OpenClaw is just so overdone. People without consideration for the implications will inevitably get burned, as we saw with the reddit posts "Agentic Coding tool X wiped my hard drive and apologized profusely". I work at a FAANG and every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way, not for the sake of actual security (that would be fine but would require actual engagement) but just to feel important, it reminds me of that.

  • > the "policy people" will climb out of their holes

    I am one of those people and I work at a FANG.

    And while I know it seems annoying, these teams are overwhelmed with not only innovators but lawyers asking so many variations of the same question it's pretty hard to get back to the innovators with a thumbs up or guidance.

    Also there is a real threat here. The "wiped my hard drive" story is annoying but it's a toy problem. An agent with database access exfiltrating customer PII to a model endpoint is a horrific outcome for impacted customers and everyone in the blast radius.

    That's the kind of thing keeping us up at night, not blocking people for fun.

    I'm actively trying to find a way we can unblock innovators to move quickly at scale, but it's a bit of a slow down to go fast moment. The goal isn't roadblocks, it's guardrails that let you move without the policy team being a bottleneck on every request.

    • I know it’s what the security folks think about, but exfiltrating to a model endpoint is the least of my concerns.

      I work on commercial OSS. My fear is that it’s exfiltrated to public issues or code. It helpfully commits secrets or other BS like that. And that’s even ignoring prompt injection attacks from the public.

      1 reply →

    • I am sure there are many good corporate security policy people doing important work. But then there are people like this;

      I get handed an application developed by my company for use by partner companies. It's a java application, shipped as a jar, nothing special. It gets signed by our company, but anybody with the wherewithal can pull the jar apart and mod the application however they wish. One of the partner companies has already done so, extensively, and come back to show us their work. Management at my company is impressed and asks me to add official plugin support to the application. Can you guess where this is going?

      I add the plugin support, and the application will now load custom jars that implement the plugin interface I had discussed with devs from that company that did the modding. They think it's great, management thinks it's great, everything works and everybody is happy. At the last minute some security policy wonk throws on the brakes. Will this load any plugin jar? Yes. Not good! It needs to only load plugins approved by the company. Why? Because! Never mind that the whole damn application can be unofficially modded with ease. I ask him how he wants that done, he says only load plugins signed by the company. Retarded, but fine. I do so. He approves it, then the partner company engineer who did the modding chimes in that he's just going to mod the signature check out, because he doesn't want to have to deal with this shit. Security asshat from my company has a meltdown, and long story short, the entire plugin feature, which was already complete, gets scrapped and the partner company just keeps modding the application as before. Months of my life down the drain. Thanks guys, great job protecting... something.

      11 replies →

    • > I'm actively trying to find a way we can unblock innovators to move quickly at scale

      So did "Move fast and break things" not work out? /i

    • The main problem with many IT and security people at many tech companies is that they communicate in a way that betrays their belief that they are superior to their colleagues.

      "unlock innovators" is a very mild example; perhaps you shouldn't be a jailor in your metaphors?

      2 replies →

  • > People without consideration for the implications will inevitably get burned

    They will also burn other people, which is a big problem you can’t simply ignore.

    https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

    But even if they only burned themselves, you’re talking as if that isn’t a problem. We shouldn’t be handing explosives to random people on the street because “they’ll only blow off their own hands”.

  • >IMO the security pitchforking on OpenClaw is just so overdone.

    Isn't the whole selling point of OpenClaw that you give it valuable (personal) data to work on, which would typically also be processed by 3rd party LLMs?

    The security and privacy implications are massive. The only way to use it "safely" is by not giving it much of value.

    • There's the selling point of using it as a relatively untrustworthy agent that has access to all the resources on a particular computer and a limited set of online tools to its name. Essentially like Claude Code or OpenCode but with its own computer, which means it doesn't constantly hit roadblocks when attempting to use legacy interfaces meant for humans. Which is... most things to do with interfaces, of course.

  • This may be a good place to exchange some security ideas. I've configured my OpenClaw in a Proxmox VM, firewalled it off of my home network so that it can only talk to the open Internet, and don't store any credentials that aren't necessary. Pretty much only the needed API keys and Signal linked device credentials. The models that can run locally do run locally, for example Whisper for voice messages or embeddings models for semantic search.

    • I think the security worries are less about the particular sandbox or where it runs, and more about that if you give it access to your Telegram account, it can exfiltrate data and cause other issues. But if you never hand it access to anything, obviously it won't be able to do any damage, unless you instruct it to.

      6 replies →

    • I was worried about the security risk of running it on my infrastructure, so I made my own:

      https://github.com/skorokithakis/stavrobot

      At least I can run this whenever, and it's all entirely sandboxed, with an architecture that still means I get the features. I even have some security tradeoffs like "you can ask the bot to configure plugin secrets for convenience, or you can do it yourself so it can never see them".

      You're not going to be able to prevent the bot from exfiltrating stuff, but at least you can make sure it can't mess with its permissions and give itself more privileges.

    • If you're really into optimizing:

      You don't need to store any credentials at all (aside from your provider key, unless you want to mod pi).

      Your claw also shouldn't be able to talk to the open internet, it should be on a VPN with a filtering proxy and a webhook relay.

    • Genuinely curious, what are you doing with OpenClaw that genuinely improves your life?

      The security concerns are valid, I can get anyone running one of these agents on their email inbox to dump a bunch of privileged information with a single email..

  • > every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way

    This is so relatable. I remember trying to set up an LLM gateway back in 2023. There were at least 3 different teams that blocked our rollout for months until they worked through their backlog. "We're blocking you, but you’ll have to chase and nag us for us to even consider unblocking you"

    At the end of all that waiting, nothing changed. Each of those teams wrote a document saying they had a look and were presumably just happy to be involved somehow?

    • I think you should read "The Phoenix Project."

      One of the lessons in that book is that the main reasons things in IT are slow isn't because tickets take a long time to complete, but that they spend a long time waiting in a queue. The busier a resource is, the longer the queue gets, eventually leading to ~2% of the ticket's time spent with somebody doing actual work on it. The rest is just the ticket waiting for somebody to get through the backlog, do their part and then push the rest into somebody else's backlog, which is just as long.

      I'm surprised FAANGs don't have that part figured out yet.

    • To be fair, the alternative is them having to maintain and continuously check N services that various devs deployed because it felt appropriate in the moment, and then there is a 50/50 chance the service will just sit there unused and introduce new vulnerability vectors.

      I do know the feeling you're talking about though, and probably a better balance is somewhere in the middle. Just wanted to add that the solution probably isn't "Let devs deploy their own services without review", just as the solution probably also isn't "Stop devs for 6 months to deploy services they need".

      2 replies →

    • From my experience, it depends on how you frame your "service" to the reviewers. Obviously 2023 was the very early stage of LLMs, where the security aspects were quite murky at best. They (reviewers) probably did not have any runbook or review criteria at that time.

      If you had advertised this as a "regular service which happens to use an LLM for some specific functions" whose "output is rigorously validated and logged", I am pretty sure you would have gotten a green light.

      This is because their concern is data privacy and security. Not because they care or the company actually cares, but because fines for non-compliance are quite high and there is greater visibility if things go wrong.

  • I think there are two different things at work here that deserve to be separated:

    1. The compliance box tickers and bean counters are in the way of innovation and it hurts companies.

    2. Claws derive their usefulness mainly from having broad permissions, not only to your local system but also to your accounts via your real identity [1]. Carefulness is very much warranted.

    [1] People correct me if I'm misguided, but that is how I see it. Run the bot in a sandbox with no data and a bunch of fake accounts and you'll see how useful that is.

    • It's been my experience that there are two types of security people. 1. Those who got into security because it was one of the only places that let them work with every part of the stack, with exposure to dozens of different domains on the regular, and to whom the idea of spending hours understanding and then figuring out ways around whitelist validations is appealing.

      2. Those that don't have much in the way of technical chops, but can get by with a surface-level understanding of several areas and then perform "security shamanism" to intimidate others and pull out lots of jargon. They sound authoritative because information security is a fairly esoteric concept, and because you can't argue against security (just as you can't argue against health and safety), the only response is "so you don't care about security?!"

      It is my experience that the first are likely to work with you to help figure out how to get your application past the hurdles and challenges you face, viewing it as an exciting problem. The second view their job as "protecting the organization", not delivering value. They love playing dress-up in security theater, and the depth of their understanding wouldn't even pose a drowning risk to infants, which they make up for with esoterica and jargon. They are also, unfortunately, the ones cooking up "standards" and "security policies", because it allows them to feel like they are doing real work without the burden of actually knowing what they are doing, while talented people are actually doing something.

      Here's a good litmus test to distinguish them, ask their opinion on the CISSP. If it's positive they probably don't know what the heck they are talking about.

      Source: A long career operating in multiple domains, quite a few of which have been in security having interacted with both types (and hoping I fall into the first camp rather than the latter)

      1 reply →

  • I am also ex-FAANG (recently departed). While I partially agree that the "policy people" pop up fairly often, my experience is more on the inadequate-checks side.

    Though with the recent layoffs and stuff, security at Amazon was getting better. Even the best practices for IAM policies that were the norm in 2018 are only getting enforced by 2025.

    Since I have a background in infosec, it always confused me how normal it was to give/grant overly permissive policies to basically anything. Even opening ports to the whole world (0.0.0.0/0) only became a significant issue in 2024, and you can still easily get away with it until the scanner finds your host/policy/configuration...

    Although nearly all AWS accounts are managed by Conduit (the internal AWS account creation and management service), the "magic team" had many "account containers" that join all these child/service accounts into a parent "organization account". By the time I left, the "organization account" had no restrictive policies set; it is up to the developers to secure their resources (like S3 buckets and their policies).

    So, I don't think the policy folks are wrong overall. In the best-case scenario, they would not need to exist in the first place, as enforcement should be in place to ensure security. But that always has an exception somewhere in someone's workflow.

    • Defense in depth is important: while there is a front door of approvals, you need something checking the back door to see if someone left the keys under the mat.

  • The difference is that _you_ wiped your own hard drive. Even if prompt injection arrives by a scraped webpage, you still pressed the button.

    All these claws throw caution to the wind in enabling the LLM to be triggered by text coming from external sources, which is another step up in recklessness.

  • In my time at a money startup (debit cards) I pushed the legal and security people to change their behaviour from "how can we prevent this" to "how can we enable this - while still staying within the legal and security framework". It worked well after months of hard work and day-long meetings.

    then the heads changed and we were back to square one.

    but for a moment it was glorious of what was possible.

    • It's a cultural thing. I loved working at Google because the ethos was "you can do that, and I'll even help you, but have you considered $reason why your idea is stupid/isn't going to work?"

  • These comments kill me. It sounds a lot like the “job creators” argument. If only these pesky regulations would go away I could create jobs and everyone would be rich. It’s a bogus argument either way.

    Now for the more reasonable point: instead of being adversarial and disparaging those trying to do their job why not realize that, just like you, they have a certain viewpoint and are trying to do the best they can. There is no simple answer to the issues we’re dealing with and it will require compromise. That won’t happen if you see policy and security folks as “climbing out of their holes”.

  • > every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way, not for the sake of actual security (that would be fine but would require actual engagement) but just to feel important

    The only innovation I want to see coming out of this powerblock is how to dismantle it. Their potential to benefit humanity sailed many, many years ago.

  • Work expands to fill the allocated resources in literally everything. This same effect can be seen in software engineering complexity more generally, but also government regulators, etc. No department ever downsizes its own influence or budget.

  • It’s not to feel important, it’s to make others feel they’re important. This is the definition of corporate.

  • "I have given root access to my machine to the whole Internet, but these security peasants come with the pitchforks for me..."

  • > I work at a FAANG and every time you try something innovative the "policy people" will climb out of their holes and put random roadblocks in your way

    What a surprise that someone working in Big Tech would find "pesky" policies to get in their way. These companies have obviously done so much good for the world; imagine what they could do without any guardrails!

He also talks about picoclaw (an IoT solution) and nanoclaw (running on your phone in Termux), which has a tiny code base.

Are people buying mac minis to run the models locally?

  • They're buying Mac Minis to isolate the environment in which their agents operate. They consume little power and are good for long running tasks.

    Most aren't running models locally. They're using Claude via OpenClaw.

    It's part of the "personal agent running constantly" craze.

  • For a machine that must run 24/7 or at least most of the day, the next best alternative to a separate computer is a cheap Linux VPS. Most people don't want to fiddle with such a setup, so they go for Mac Minis. Even the lower-spec ones are good enough, and they consume little power when idle.

  • No they’re buying them as a home server. You can’t message your claw if your laptop lid is closed.

    • A $100 mini PC would do that just as well though? Mac minis are pricey if all you're doing is having it sit and process a couple of API calls now and again.

Why use OpenClaw vs n8n with LLM to describe the workflow? In other words, if I can setup a Zapier/n8n workflow with natural language, why would I want to use OpenClaw?

Nondeterministic execution doesn’t sound great for stringing together tool calls.

> I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all.

So... why do that, then?

To be clear, I don't mean "why use agents?" I get it: they're novel, and it's fun to tinker with things.

But rather: why are you giving this thing that you don't trust, your existing keys (so that it can do things masquerading as you), and your existing data (as if it were a confidante you were telling your deepest secrets)?

You wouldn't do this with a human you hired off the street. Even if you're hiring them to be your personal assistant. Giving them your own keys, especially, is like giving them power-of-attorney over your digital life. (And, since they're your keys, their actions can't even be distinguished from your own in an audit log.)

Here's what you would do with a human you're hiring as a personal assistant (who, for some reason, doesn't already have any kind of online identity):

1. you'd make them a new set of credentials and accounts to call their own, rather than giving them access to yours. (Concrete example: giving a coding agent its own Github account, with its own SSH keys it uses to identify as itself.)

2. you'd grant those accounts limited ACLs against your own existing data, just as needed to work on each new project you assign to them. (Concrete example: giving a coding agent's GitHub user access to fork specific private repos of yours, and the ability to submit PRs back to you.)

3. at first, you'd test them by assigning them to work on greenfield projects for you, that don't expose any sensitive data to them. (The data created in the work process might gradually become "sensitive data", e.g. IP, but that's fine.)

To me, this is the only sane approach. But I don't hear about anyone doing this with agents. Why?

I've been making digital agents since the GPT-3 API came out. Optionally fully local, fully voiced, animated, all of that. Even co-ran a VC funded company making agents, before a hostile takeover screwed it all up. The writing has been on the wall for years about where this was headed.

I have been using and evolving my own personal agent for years but the difference is that models in the last year have suddenly become way more viable. Both frontier and local models. I had been holding back releasing my agents because the appetite has just not been there, and I was worried about large companies like X ripping off my work, while I was still focused on getting things like security and privacy right before releasing my agent kit.

It's been great seeing claws out in the wild delighting people, makes me think the time is finally right to release my agent kit and let people see what a real personal digital agent looks like in terms of presentation, utility and security. Claws are still thinking too small.

I'm impressed with how we moved from "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", "don't let AI escape" to "Hey AI, here is internet, do whatever you want".

  • The DoD's recent beef with Anthropic over their right to restrict how Claude can be used is revealing.

    > Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance

    Autonomous AI weapons is one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that’s where we apparently are.

    1. https://www.nbcnews.com/tech/security/anthropic-ai-defense-w...

    • Hasn't Ukraine already proved out autonomous weapons on the battlefield? There was a NYT podcast a couple of years ago where they interviewed a higher-up in the Ukrainian military, and they said it's already in place with FPV drones: loitering, target identification, attack, the whole 9 yards.

      You don't need an LLM to do autonomous weapons, a modern Tomahawk cruise missile is pretty autonomous. The only change to a modern tomahawk would be adding parameters of what the target looks like and tasking the missile with identifying a target. The missile pretty much does everything else already ( flying, routing, etc ).

      2 replies →

    • > Autonomous AI weapons is one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that’s where we apparently are.

      This situation legitimately worries me, but it isn't even really the SkyNet scenario that I am worried about.

      To self-quote a reply to another thread I made recently (https://news.ycombinator.com/item?id=47083145#47083641):

      When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over.

      I think we have less to worry about from a future SkyNet-like AGI system than we do just a modern or near future LLM with all of its limitations making a very bad oopsie with significant real-world consequences because it was allowed to control a system capable of real-world damage.

      I would have probably worried about this situation less in times past when I believed there were adults making these decisions and the "Secretary of War" of the US wasn't someone known primarily as an ego-driven TV host with a drinking problem.

      1 reply →

    • > Autonomous AI weapons

      In theory, you can do this today, in your garage.

      Buy a quad as a kit. (cheap)

      Figure out how to arm it (the trivial part).

      Grab yolo, tuned for people detection. Grab any of the off the shelf facial recognition libraries. You can mostly run this on phone hardware, and if you're stripping out the radios then possibly for days.

      The shim you have to write: software to fly the drone into the person... and that's probably floating around out there somewhere as well.

      The tech to build "Screamers" (see: https://en.wikipedia.org/wiki/Screamers_(1995_film) ) already exists, is open source and can be very low power (see: https://www.youtube.com/shorts/O_lz0b792ew ) --

      8 replies →

  • This is exactly why artificial super-intelligences are scary. Not necessarily because of its potential actions, but because humans are stupid, and would readily sell their souls and release it into the wild just for an ounce of greed or popularity.

    And people who don't see it as an existential problem either don't know how deep human stupidity can run, or are exactly those that would greedily seek a quick profit before the earth is turned into a paperclip factory.

    • I love this.

      Another way of saying it: the problem we should be focused on is not how smart the AI is getting. The problem we should be focused on is how dumb people are getting (or have been for all of eternity) and how they will facilitate and block their own chance of survival.

      That seems uniquely human but I'm not an ethnobiologist.

      A corollary to that is that the only real chance for survival is that a plurality of humans need to have a baseline of understanding of these threats, or else the dumb majority will enable the entire eradication of humans.

      Seems like a variation of Darwin's law, but I always thought that was for single examples. This is applied to the entirety of humanity.

      7 replies →

    • Look, we’ve had nukes for almost 100 years now. Do you really think our ancient alien zookeepers are gonna let us wipe with AI? Semi /j

    • It's even worse than that.

      The positive outcomes are structurally being closed off. The race to the bottom means that you can't even profit from it.

      Even if you release something that has plenty of positive aspects, it can be and is immediately corrupted and turned against you.

      At the same time you have created desperate people/companies and given them huge capabilities for very low cost and the necessity to stir things up.

      So for every good door that someone opens, it pushes ten other companies/people to either open random potentially bad doors or die.

      Regulating is also out of the question because otherwise either people who don't respect regulations get ahead or the regulators win and we are under their control.

      If you still see some positive door, I don't think sharing them would lead to good outcomes. But at the same time the bad doors are being shared and therefore enjoy network effects. There is some silent threshold which probably has already been crossed, which drastically change the sign of the expected return of the technology.

  • Humans are inherently curious creatures. The excitement of discovery is a strong driving force that overrides many others, and it can be found across the IQ spectrum.

    Perhaps not in equal measure across that spectrum, but omnipresent nonetheless.

  • There was a small group of doomers and sci-fi-obsessed, terminally online people that said all these things. Everyone else said it's a better Google that can help them write silly haikus. Coders thought it could write a lot of boilerplate code.

  • Because even really bad autonomous automation is pretty cool. The marketing has always been aimed at the general public who know nothing

    • It's not the general public who know nothing that develop and release software.

      I am not specifically talking about this issue, but do remember that very little bad happens in the world without the active or even willing participation of engineers. We make the tools and structures.

  • We didn't "move from"; both points of view exist. Depending on the news, attention may shift from one to the other.

    Anyways, I don't expect Skynet to happen. AI-augmented stupidity may be a problem though.

  • > we moved from "AI is dangerous"

    There was never consensus on this. IME the vast majority of people never bought in to this view.

    Those of us who were making that prediction early on called it exactly like it is: people will hand over their credentials to completely untrustworthy agents and set them loose, people will prompt them to act maximally agentic, and some will even prompt them to roleplay evil murderbots, just for lulz.

    Most of the dangerous scenarios are orthogonal to the talking points around “are they conscious”, “do they have desires/goals”, etc. - we are making them simulate personas who do, and that’s enough.

  • I would have said doomers never win, but in this case it was probably just a PR strategy to give the impression that AI can do more than it actually can. The doomers were the makers of AI; that's enough to tell you what BS the doomerism is :)

  • I mean. The assumption that we would obviously choose to do this is what led to all that SciFi to begin with. No one ever doubted someone would make this choice.

  • Even if hordes of humanoids with “ice” vests start walking through the streets shooting people, the average American is still not going to wake up and do anything

    • The average HNer may be at least as bad as the average American on this axis. Lots of big tech apologist and might makes right takes here. Also a lot of "no big deal" style downplaying of risks and externalities

  • And be nice and careful, please. :)

    Claw to user: Give me your card credentials and bank account. I will be very careful because I have read my skills.md

    Mac Minis should be sold with a warning, like on a pack of cigarettes :)

    Not everybody installs their claw so that it runs in a sandbox/container.

  • Other than some very askew bizarro rationalists, I don’t think that many people take AI hard takeoff doomerism seriously at face value.

    Much of the cheerleading for doomerism was large AI companies trying to get regulatory moats erected to shut down open weights AI and other competitors. It was an effort to scare politicians into allowing massive regulatory capture.

    Turns out AI models do not have strong moats. Making models is more akin to the silicon fab business where your margin is an extreme power law function of how bleeding edge you are. Get a little behind and you are now commodity.

    General wide breadth frontier models are at least partly interchangeable and if you have issues just adjust their prompts to make them behave as needed. The better the model is the more it can assist in its own commodification.

  • I mean we know at this point it's not super intelligent AGI yet, so I guess we don't care.

    • There is no scientific basis to expect that the current approach to AI involving LLMs could ever scale up to super intelligent AGI. Another major breakthrough will be needed first, possibly an entirely new hardware architecture. No one can predict when that will come or what it will look like.

I guess it's a relief to know that we developers will never get good at naming things!

  • Don't worry, Microsoft will eventually name theirs something worse, probably pre-prepended with 'Viva'

    ... actually, no - they'll just call it Copilot to cause maximum confusion with all the other things called Copilot

Karpathy has a good ear for naming things.

"Claw" captures what the existing terminology missed, these aren't agents with more tools (maybe even the opposite), they're persistent processes with scheduling and inter-agent communication that happen to use LLMs for reasoning.

  • I also like the callback - not sure if it's intentional - to Stross's "Lobsters" (short story that turned into the novel Accelerando).

  • How does "claw" capture this? Other than being derived from a product with this name, the word "claw" doesn't seem to connect to persistence, scheduling, or inter-agent communication at all.

  • People are not understanding that “claw” derives from the original spin on “Claude” when the original tool was called “clawdbot”

  • Why do we always have to come up with the stupidest names for things. Claw was a play on Claude, is all. Granted, I don’t have a better one at hand, but that it has to be Claw of all things…

    • The real-world cyberpunk dystopia won’t come with cool company names like Arasaka, Sense/Net, or Ono-Sendai. Instead we get childlike names with lots of vowels and alliteration.

      3 replies →

    • I am reading a book called Accelerando (highly recommended), and there is a plot thread about a lobster collective uploaded to the cloud. Claws reminded me of that - not sure it was an intentional reference tho!

    • > I don’t have a better one at hand

      Perfect is the enemy of good. Claw is good enough. And perhaps there is utility to neologisms being silly. It conveys that the namespace is vacant.

  • Does he?

    Claw is a terrible name for a basic product which is Claude code in a loop (cron job).

    This whole hype cycle is absurd and ridiculous for what is a really basic product full of security holes and entirely vibe coded.

    The name won’t stick and when Apple or someone releases a polished version which consumers actually use in two years, I guarantee it won’t be called “iClaw”

I really don’t understand what it does. Is it just the equivalent of cron jobs but with agents?

I'm honestly not that worried. There are some obvious problems (exfiltrating data labeled as sensitive, taking actions that are costly, deleting/changing sensitive resources), but if you have properly compliant infrastructure, all these actions need confirmations, logging, etc. For humans this seemed more like a nuisance, but now it seems essential. And all these systems are actually much, much easier to set up.

I run a Discord where we've had a custom-coded bot I created since before LLMs became useful. When they did, I integrated LLMs into the bot so you could ask it questions in free-text form. I've gradually added AI-type features to this integration over time, like web-search grounding once that was straightforward to do.

The other day I finally found some time to give OpenClaw a go, and it went something like this:

- Installed it on my VPS (I don't have a Mac mini lying around, or the inclination to just go out and buy one just for this)

- Worked through a painful path of getting a browser working for it (VPS = no graphics subsystem...)

- Decided as my first experiment, to tell it to look at trading prediction markets (Polymarket)

- Discovered that I had to do most of the onboarding for this, for numerous reasons like KYC, payments, other stuff OpenClaw can't do for you...

- Discovered that it wasn't very good at setting up its own "scheduled jobs". It was absolutely insistent that it would "Check the markets we're tracking every morning", until after multiple back and forths we discovered... it wouldn't, and I had to explicitly force it to add something to its heartbeat

- Discovered that one of the bets I wanted to track (fed rates change) it wasn't able to monitor because CME's website is very bot-hostile and blocked it after a few requests

- Told me I should use a VPN to get around the block, or sign up to a market data API for it

- I jumped through the various hoops to get a NordVPN account and run it on the VPS (hilariously, once I connected it blew up my SSH session and I had to recovery console my way back in...)

- We discovered that oh, NordVPN's IP's don't get around the CME website block

- Gave up on that bet, chose a different one...

- I then got a very blunt WhatsApp message: "Usage limit exceeded". There was nothing in the default 'clawbot logs' as to why. After digging around in other locations I found a more detailed log: yeah, it's OpenAI. Logged into the OpenAI platform - it had churned through $20 of tokens in about 24h.

At this point I took a step back and weighed the pros and cons of the whole thing, and decided to shut it down. Back to human-in-the-loop coding agent projects for me.

I just do not believe the influencers who are posting that their Clawbots are "running their entire company". There are so many bot-blockers everywhere it's like that scene with the rakes in the Simpsons...

All these *claw variants won't solve any of this. Sure you might use a bit less CPU, but the open internet is actually pretty bot-hostile, and you constantly need humans to navigate it.

What I have done from what I've learned though, is upgrade my trusty Discord bot so it now has a SOUL.md and MEMORIES.md. Maybe at some point I'll also give it a heartbeat, but I'm not sure...

  • > CME's website is very bot-hostile and blocked it after a few requests

    This is one of the reasons people buy a Mac mini (or similar local machine). Those browser automation requests come from a residential IP and are less likely to be blocked.

Perhaps the whole cybersecurity theatre is just that, a charade. The frenzy for these tools proves it. IoT was apparently so boring that the main concern was security. AI is so much fun that for the vast majority of hackers, programmers and CTOs, security is no longer just an afterthought; it's nonexistent. Nobody cares.

Did "claws" get the name from Claude? I haven't been following, but didn't someone make OpenClaude, and that turned into OpenClaw, and ta-da, a new name for a thing?

What is the benefit of a Mac mini for something like this?

simonw> It even comes with an established emoji [lobster emoji]

Good thing they didn't call it OpenSeahorse!

Excited to see and work with things in new ways.

It's interesting how someone prominent understanding and summarizing it is seen as blessing it into the canon of LLMs, whereas people might have been quietly doing these things for a long time (lots of text files with Claude).

I'm not sure how long claws will last, a lot was said about MCPs in their initial form too, except they were just gaping security holes too often as well.

Why are people buying Mac Minis for this? I understand Mac Studios if you’re self hosting the models. But otherwise why not buy any cheap mini PC?

The term “claw” for an agent in a loop is the most ridiculous thing I’ve heard in some time.

Why are Karpathy and SimonW trying to push new terms on us all the time? What are they trying to gain from this weird ass hype cycle?

What I don’t get: If it’s just a workflow engine why even use LLM for anything but a natural language interface to workflows? In other words, if I can setup a Zapier/n8n workflow with natural language, why would I want to use OpenClaw?

Nondeterministic execution doesn’t sound great for stringing together tool calls.

The challenging thing for those of us that have gone around the sun a few times is that…you’re just going to have to figure it out yourself.

We can tell you to be cautious or aware of security bullshit, but there’s a current that’s buying Mac Minis and you want to be in it.

Nothing I can say changes that and as a grown up, you get to roll those dice yourself.

70% of you are going to be fine and encourage others, the rest are going to get pwnd, and that’s how it goes.

You’re doing something that decades of prior experience warned you about.

I find it dubious that a technical person claims to have "just bought a new Mac mini to properly tinker with claws over the weekend". Like, can they not just play with it on an old laptop lying around? A virtual machine? Or why did they not buy a Pi instead? OpenClaw works with Linux, so I'm not sure how this whole Mac mini cliché even started; it's obviously overkill for something that only relays API calls.

  • Using a Mac Mini allows for better integration with existing Apple services. For many users, that just makes sense.

    • Exactly, especially iMessage. It's fair to think that's not worth it, but for those who choose to use it, it is.

  • As a long time computer hobbyist who grew up in MSDOS and now resides in Linux I'm starting to wonder if I am not more connected to computing than a lot of people employed in the field.

  • Your suspicions are correct, any extra machine works: 4GB Pi, virtual machine, or old laptop.

What is anyone really doing with openclaw? I tried to stick to it but just can't understand the utility beyond just linking AI chat to whatsapp. Almost nothing, not even simple things like setting reminders, worked reliably for me.

It tries to understand its own settings but fails terribly.

Ah yes, let's create an autonomous actor out of a nondeterministic system which can literally be hacked by giving it plaintext to read. Let's give that system access to important credentials, letting it poop all over the internet.

Completely safe and normal software engineering practice.

> on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code

After all these years, why do we keep coming back to lines of code being an indicator for anything sigh.

  • > fits into both my head and that of AI agents

    Why are you not quoting the very next line where he explains why loc means something in this context?

    • > For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram.

      Here's the next line and the line after that. Again, LOC is really not a good measurement of software quality and it's even more problematic if it's a measurement of one's ability to understand a codebase.

I'm predicting a wave of articles in a few months about why clawd is over and was overhyped all along, and the position of not having delved into it in the first place will have been the superior use of your limited time alive.

  • Of course, if the proponents are right, this approach may pair well with skipping coding :-)

  • Openclaw the actual tool will be gone in 6 months, but the idea will continue to be iterated on. It does make a lot of sense to remotely control an ai assistant that is connected to your calendar, contacts, email, whatever.

    Having said that this thing is on the hype train and its usefulness will eventually be placed in the “nice tool once configured” camp

So now the official name of the LLM agent orchestrator is claw? Interesting.

  • From https://openclaw.ai/blog/introducing-openclaw:

    The Naming Journey

    We’ve been through some names.

    Clawd was born in November 2025—a playful pun on “Claude” with a claw. It felt perfect until Anthropic’s legal team politely asked us to reconsider. Fair enough.

    Moltbot came next, chosen in a chaotic 5am Discord brainstorm with the community. Molting represents growth - lobsters shed their shells to become something bigger. It was meaningful, but it never quite rolled off the tongue.

    OpenClaw is where we land. And this time, we did our homework: trademark searches came back clear, domains have been purchased, migration code has been written. The name captures what this project has become:

        Open: Open source, open to everyone, community-driven
        Claw: Our lobster heritage, a nod to where we came from

OpenClaw is the 6-7 of the software world. Our dystopia is post-absurdist.

  • You can see it that way, but I think that's a cynic's mindset.

    I experience it personally as a super fun approach to experimenting with the power of agentic AI. It gives you and your LLM so much power, and you can let your creativity flow and be amazed at what's possible. For me, OpenClaw is so much fun because (!) it is so freaking crazy. Precisely the spirit that I missed in the last decade of software engineering.

    Don't use it on the work MacBook, I'd suggest. But that's personal responsibility, I would say, and everyone can decide that for themselves.

  • I had to use AI to actually understand what you wrote, and I think it's an underrated comment.

[flagged]

  • It’s really just easier integration with stuff like iMessage. I assume it's easier for email and calendars too, since trying to come up with anything sane for a Linux VM + GSuite is a total wreck. At least it has been in my limited experience so far.

    Other than that, I can’t really come up with an explanation of why a Mac mini would be “better” than, say, an Intel NUC or a virtual machine.

    • Unified memory on Apple Silicon. On PC architecture, you have to shuffle around stuff between the normal RAM and the GPU RAM.

      The Mac mini just happens to be the cheapest way to get this.

      6 replies →

  • I'm guessing maybe they just wanted an excuse to buy a Mac Mini? They're nice machines.

  • It would be much cheaper to spin up a VM, but I guess most people only have laptops without a stable internet connection.

Can't we rename "Claws" -> "Personal assistants"?

OpenClaw is a stupid name. Even "OpenSlave" would be a better fit.

  • How about "Open Assistants"? "OpenAss" for short?

  • I think claws is a great name. They let the AI go grab things. They snap away and get stuff done. Claws are powerful and everything that has claws is cool.

    Some of this may be slightly satirical.

    (But I still think “claws” works better than “personal assistant” which anthropomorphises the technology too much.)

  • Stupid name? Sure, but there's no point in fighting it. Claws is a sticky name.

    • These are all just transparent attempts to sound like "Claude", and if they're "sticky", that's the salient reason.

  • "Personal assistant” already has enough uses (both a narrower literal definition and a broader metaphorical definition applying to tools which includes but is not limited to what "claws" refers to) that using it probably makes communication more confusing rather than more clear. I don't think “claws” is a great name, but it does have the desirable trait of not already being heavily overloaded in a way that would promote confusion in the domain of application.

  • > Even "OpenSlave" would be a better fit.

    Wow. Can we please not?

    • Let's not dance around the issue.

      It's clear that the reason the VC class is so frothing at the mouth over the potential of LLMs is that they see slavery as the ideal. They don't want employees. They want perfectly subservient, perfectly servile automatons. The whole point of the AI craze is that slavery is the goal.

Who is Andrej Karpathy?

  • https://karpathy.ai/

    PhD in neural networks under Fei-Fei Li, founding member of OpenAI, director of AI at Tesla, etc. He knows what he's talking about.

    • I think this misses it a bit.

      Andrej got famous because of his educational content. He's a smart dude but his research wasn't incredibly unique amongst his cohort at Stanford. He created publicly available educational content around ML that was high quality and got hugely popular. This is what made him a huge name in ML, which he then successfully leveraged into positions of substantial authority in his post-grad career.

      He is a very effective communicator and has a lot of people listening to him. And while he is definitely more knowledgeable than most people, I don't think that he is uniquely capable of seeing the future of these technologies.

  • [flagged]

    • I wish he would go back to writing educational blogs/books/papers/material so we can learn how to build AI ourselves.

      Most of us have the imagination to figure out how to best use AI. I'm sure most of us considered what OpenClaw is doing from the first days of LLMs. What we're missing is the guidance to understand the rapid advances from first principles.

      If he doesn't want to provide that, perhaps he can write an AI tool to help us understand AI papers.

      2 replies →

  • [flagged]

    • Andrej is an extremely effective communicator and educator. But I don't agree that he is one of the most significant AI pioneers in history. His research contributions are significant but not exceptional compared to other folks around him at the time. He got famous for free online courses, not his research. His work at Tesla was not exactly a rousing success.

      Today I see him as a major influence in how people, especially tech people, think about AI tools. That's valuable. But I don't really think it makes him a pioneer.

      2 replies →

    • I bet they feel so, so silly. A quick bit of reflection might reveal sarcasm.

      I'll live up to my username and be terribly brave with a silly rhetorical question: why are we hearing about him through Simon? Don't answer, remember. Rhetorical. All the way up and down.

      1 reply →

[flagged]

  • This doesn't seem to be promoting every new monstrosity?

    "m definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare. But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.

    Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out."

    • > just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.

      Layers of "I have no idea what the machine is doing" on top of other layers of "I have no idea what the machine is doing". This will end well...

      7 replies →

    • Did you read the part where he loves all this shit regardless? That's basically an endorsement. Just like after he coined the "vibe coding" term, now every moron will be scrambling to write about this "new layer".

  • I expect him to be LLM curious.

    If he has influence it is because we concede it to him (and I have to say that I think he has worked to earn that).

    He could say nothing of course but it's clear that is not his personality—he seems to enjoy helping to bridge the gap between the LLM insiders and researchers and the rest of us that are trying to keep up (…with what the hell is going on).

    And I suspect if any of us were in his shoes, we would get deluged with people constantly engaging us, trying to elicit our take on some new LLM outcrop or turn of events. It would be hard to stay silent.

  • We construct a circus around everything; that's the nature of human attention :). Why are people so surprised by pop compsci when pop physics has been around forever?

  • He really is, on Twitter at least. But his podcast with Dwarkesh was such a refreshing dose of reality that it's like he is a completely different person on social media. The hype carries him away, I suppose.

  • LLMs alone may not deliver, but LLMs wrapped in agentic harnesses most certainly do.

  • So what's your point? That he should just not get involved in the most discussed topic of the last month and the fastest-growing open-source project?

I can say with confidence that I will not use "claw" or any derivations, because it attracts a certain ilk.

"team" is plenty good enough, we already use it, it makes for easier integration into hybrid carbon-silicon collaboration

Problem is, Claws still use LLMs, so they're DOA.

  • Is the problem you're thinking of LLMs, or cloud LLMs versus local ones?

    • So, from time to time I'll try the new frontier research models. Not being held down by shitty quants, bizarre sampler settings, and weird context settings vastly improves output quality over whatever all the commercial services are doing; plus having an actual copy of the weights means I can have consistent service quality.

      Problem is, a good LLM reproduces its training as verbatim as the prompt and quant quality allow. Like, that's its entire purpose. It gives you more of what you already have.

      Most of these models are trained on unvetted inputs. They will reproduce bad inputs, and do so well. They do not comprehend anything you're saying to them. They are not a reasoning machine; they are a reproduction machine.

      Just because I can get better quality inferring locally doesn't mean it stops being an LLM. I don't want a better LLM; I want a machine that can actually reason effectively.
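      For what it's worth, here's a minimal sketch of what I mean by controlling those knobs yourself (assuming llama-cpp-python and an arbitrary local GGUF file; the path and values are illustrative, not a recommendation):

          # Local inference sketch using llama-cpp-python (assumed installed).
          # The point is only that the quant, context size, and sampler
          # settings are explicit and pinned, not a provider's hidden defaults.
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/my-model-q8_0.gguf",  # hypothetical local weights
              n_ctx=8192,                              # context window I choose
          )

          out = llm(
              "Explain what a 'claw' adds on top of a plain LLM agent.",
              max_tokens=256,
              temperature=0.7,  # sampler settings under my control
              top_p=0.9,
          )
          print(out["choices"][0]["text"])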