Show HN: NanoClaw – “Clawdbot” in 500 lines of TS with Apple container isolation
6 days ago (github.com)
I’ve been running Clawdbot for the last couple weeks and have genuinely found it useful but running it scares the crap out of me.
OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context.
This is not a swiss army knife. It’s built to match my exact needs. Fork it and make it yours.
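To give a flavor of the isolation model, here's a minimal sketch of what a per-chat container spawn can look like (flag names assumed to mirror Docker's; the actual code in the repo differs):

```typescript
// Minimal sketch of per-chat isolation, assuming Apple's `container` CLI
// accepts Docker-style flags (--rm, --volume); the real NanoClaw code differs.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Each chat gets its own directory on the host; the agent only ever sees
// that directory, mounted inside a throwaway lightweight Linux VM.
async function runAgentTurn(chatId: string, prompt: string): Promise<string> {
  const chatDir = `${process.env.HOME}/nanoclaw/chats/${chatId}`;
  const { stdout } = await run("container", [
    "run", "--rm",                       // tear the VM down after each turn
    "--volume", `${chatDir}:/workspace`, // only this chat's files are visible
    "nanoclaw-agent",                    // hypothetical image name
    "agent", "--prompt", prompt,
  ]);
  return stdout;
}
```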
I think these days if I’m going to be actively promoting code I’ve created (with Claude, no shade for that), I’ll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It’s hard to describe why - maybe it feels like if you care enough to promote it, you should care to try and actually communicate, person to person, to the human being promoted at. Dunno, just my 2c and maybe just my own preference. I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me, the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.
Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...
OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
The problem with LLM-written docs is that I run into so many README.md's where it's clear the author barely read the thing they're expecting me to read, and it's got errors that waste my time and energy.
I don't mind it if I have good reason to believe the author actually read the docs, but that's hard to know from someone I don't know on the internet. So I actually really appreciate if you are editing the docs to make them sound more human written.
5 replies →
“I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me.” - AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done.
38 replies →
Hey, you do you, I’m glad you appreciate my perspective. I wasn’t trying to catch you out but I see how it came across that way - I apologise for my edit, I had hoped the ;) would show that I meant it in jest rather than in meanness but I shouldn’t have added it in the first place.
As I said in my comment, no shade for writing the code with Claude. I do it too, every day.
I wasn’t “irked” by the readme, and I did read it. But it didn’t give me a sense that you had put in “time and effort” because it felt deeply LLM-authored, and my comment was trying to explore that and how it made me feel. I had little meaningful data on whether you put in that effort because the readme - the only thing I could really judge the project by - sounded vibe coded too. And if I can’t tell if there has been care put into something like the readme, how can I tell if there’s been care put into any part of the project? If there has and if that matters - say, I put care into this and that’s why I’m doing a show HN about it - then it should be evident and not hidden behind a wall of LLM-speak! Or at least, that’s what I think. As I said in a sibling comment, maybe I’m already a dinosaur and this entire topic won’t matter in a few years anyway.
10 replies →
For example - I checked src/, and there’s clearly more than ~500 lines of code, ignoring the other dirs. I’m on mobile, maybe someone else can run wc -l on the repo and confirm. Is there a reason this number is inaccurately stated? Immediately makes me wary of the vibe coded nature of it.
So you created a project, implicitly to help individuals keep their computers and credentials secure, but you can’t be bothered to proofread a readme?
I get using AI, I do it all day every day it feels like, but this comes off as not having respect for others’ time.
I 100% agree, reading very obviously ai written blogs and "product pages"/readme's has turned into a real ick for me.
Just something that screams "I don't care about my product/readme page, why should you".
To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.
I get where you're coming from. It's like a person signing a love letter with a stamped signature or something.
1 reply →
Why do you think people do not care about something if they AI generated it? I care about many things I've generated.
3 replies →
Project releases with llms have grown to be less about the functionality and more about convincing others to care.
Before, the proof of work of code in a repo was by default a signal of a lot of thought going into something. Now this flood of code in these vibe coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes
Been writing code professionally for almost 3 decades.
Not one line of code I wrote 20 years ago has any more economic value than East German currency.
All code is social ephemera. Ethno objects. It lacks the intrinsic value of something like indoor plumbing.
It's electrical state in a machine. Our only real goal was to convince people the symbols on the screen were coupled to some real world value while it is 100% decoupled from whatever real physical quantity we are tracking.
We've all been Frank from Always Sunny; we make money, line go up. We don't define truth. The churn of physics does that.
1 reply →
I agree 100% with you. It's even worse though. They haven't checked whether the readme hallucinated things or not (spoiler: it did):
https://news.ycombinator.com/item?id=46850317
I don’t want to come off like I’m shitting on the poster here. I’ve definitely made that kind of careless mistake, probably a dozen times this week. And maybe we’re heading to a future where nobody even reads the readme anymore, because an agent can just conjure one from the source code at will, so maybe it actually straight up doesn’t matter. I’ve just been thinking about what it means to release software nowadays, and I think the window for releasing software for clout and credit is closing, since creating software basically requires a Claude subscription and an idea now, so fewer people are impressed by the thing simply existing, and the standard of care for a project released for that aim (of clout) needs to be higher than it maybe needed to be in the past. But who knows, I’m probably already a dinosaur in today’s world, and I really don’t mean to shit on the OP - it’s a good idea for a project and it makes a lot of sense for it to exist. I just can’t tell if any actual care has gone into it, and if not, why promote?
2 replies →
the main reason I'd want a person to write or at least curate readmes is because models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims on the stuff they just made)
so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs
the caveats to this are that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly oversee that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes
track-record is going to see its importance redoubled
You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
I am confused by “senior-learning engineer”; so he’s learning as a senior, learning at a “senior” level in a “continuous learning”, “life long learning” kind of way? What is senior-learning? Searching for it only comes up with learning for seniors programs.
1 reply →
FWIW, this is a variation of the age-old thing about open source.
It isn’t “have it your way”, he graciously made code available, use it or leave it.
> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me, the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji
Don't worry, bro. If enough people are like you, there will be a fully automatic workflow to add typos into AI writing.
As a practical matter, if it tones down the AI sleuthing vs. reading, it might be a good idea.
Assuming the written/generated text is well written/generated, of course.
orrrr you could go the other way and read explicitly ai-generated docs that use the code as source of truth https://deepwiki.com/gavrielc/nanoclaw
Cool idea but I just tried it out on one of my own repos and I couldn't get past the reCAPTCHA, maybe remove that.
(I'm a human btw)
> running it scares the crap out of me
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.
It turns out the lethal trifecta is not so lethal. Should a business avoid hiring employees since technically employees can steal from the cash register? The lethal trifecta is about binary security. Either the data can be taken or it can't. This may be overly cautious. It may be possible that hiring an employee has a positive expected value even when you account for the possibility of one stealing from the cash register.
Employees are humans and therefore subject to the law. There are remedies. And you can point a camera at the cash register.
Who are you going to arrest and/or sue when you run a chat bot "at your own risk" and it shoots you in the foot?
3 replies →
You're taking it too literally.
The point is to recognise that certain patterns have a cost in the form of risks, and that cost can massively outweigh the benefits.
Just as the risk of giving a poorly vetted employee unfettered access to the company vault.
In the case of employees, businesses invest a tremendous amount of money in mitigating the insider risks. Nobody is saying you should take no risks with AI, but that you should be aware of how serious the risks are, and how to mitigate them or manage them in other ways.
Exactly as we do with employees.
Maybe. People have run wildly insecure phpBB and Wordpress plugins, so maybe it's the same cycle again.
Those usually didn't have keys to all your data. Worst case, you lost your server, and perhaps you hosted your emails there too? Very bad, but nothing compared to the access these clawdbot instances get.
1 reply →
> are running
I understand that things can go wrong and there can be security issues, but I see at least two other issues:
1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?
2. what if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".
How long before I get twelve ads along with paid vendor recommendations?
> what if the current prices really are unsustainable and the thing goes 10x?
Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.
16 replies →
Seems much more likely the cost will go down 99%. With open source models and architectural innovations, something like Claude will run on a local machine for free.
1 reply →
I asked Gemini deep research to project when that will likely happen based on historical precedent. It guessed October 2027.
> what if the current prices really are unsustainable and the thing goes 10x?
What if a thermonuclear war breaks out? What's your backup plan for this scenario?
I genuinely can't tell which is more likely to happen in the next decade. If I have to guess I'll say war.
If you peruse molthub and moltbook you'll see the agents have already built six or seven such social networks. It is terrifying.
Even an OnlyMolts!!
Stupid stuff openclaw did for me:
- Created its own github account, then proceeded to get itself banned (I have no idea what it did, all it said was it created some new repos and opened issues, clearly it must've done a bit more than that to get banned)
- Signed up for a Gmail account using a pay as you go sim in an old android handset connected with ADB for sms reading, and again proceeded to get itself banned by hammering the crap out of the docs api
- Used approx $2k worth of Kimi tokens (Thankfully temporarily free on opencode) in the space of approx 48hrs.
Unless you can budget $1k a week, this thing is next to useless. Once these free offers end on models a lot of people will stop using it, it's obscene how many tokens it burns through, like monumentally stupid. A simple single request is over 250k chars every single time. That's not sustainable.
This kind of automated task, if not properly optimized, is basically waste-of-money garbage software. Any bug can cause it to loop until all the money is spent.
I installed it last night. Burned 7M tokens in 45 minutes. I don't even know how. There's no way to see what it's actually doing, as far as I can tell.
What was the task you asked it to do that led it to do these things?
I asked it to get itself set up and ready to be a helpful marketing assistant for a web based product. I'd intentionally kept it vague and told it to be proactive, which was probably what caused it. Lesson learnt!
YOLO is a bit of an understatement for this
filing spam issues can easily get the account banned if it annoys the wrong maintainers.
In that case I'm glad they banned it, had no idea it was going to do something so stupid!
Did you give it your credit card?
Wouldn't a crypto wallet with a small amount deposited be smarter?
Nope, Kimi K2.5 is free on opencode at the moment, it was using that.
> and again proceeded to get itself banned by hammering the crap out of the docs api
> Used approx $2k worth of Kimi tokens
Holy shit dude you really should rethink your life decisions this is NUTS
Yeah, it didn't cost anything as it's free right now, this was literally a test to see what the hype was about. All I'd asked it to do was get itself set up to be a helpful marketing assistant for a web-based product. No specifics or anything, it just decided to be 'helpful'.
> (Thankfully temporarily free on opencode)
they paid $0, it's all VC money printing for now
1 reply →
> AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what's happening. No debugging tools; describe the problem, Claude fixes it.
> Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork.
I’m interested to see how this model pans out. I can see benefits (don’t carry complexity you don’t need) and costs (how do I audit the generated code?).
But it seems pretty clear that things will move in this direction in ‘26 with all the vibe coding that folks are enjoying.
I do wonder if the end state is more like a very rich library of composable high-order abstractions, with Skills for how to use them - rather than raw skills with instructions for how to lossily reconstruct those things.
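For concreteness, here's a rough sketch of what a contributed skill might look like, going by Claude Code's SKILL.md convention (the name and steps are invented for illustration, not taken from the repo):

```markdown
---
name: add-telegram
description: Transform this fork to talk over Telegram instead of WhatsApp
---

When the user runs /add-telegram:

1. Remove the baileys dependency and the WhatsApp listener.
2. Add a Telegram bot client that writes incoming messages into the
   same SQLite table the polling loop already reads.
3. Rewire the outbound path to send replies via the Telegram Bot API.
4. Run the existing tests and summarize every file you changed.
```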
I think the more interesting question is whether tools were the right abstraction. What is the implication of having only a single "shell" tool? Should the narrowing from infinite possibilities to a few happen by the AI having limited tools, or should whatever the shell calls have the limitations applied there? Tools in a way are redundant.
One of the things that makes Clawdbot great is the allow-all permissions model that lets it do anything. Not sure how those external actions with damaging consequences get sandboxed with this.
Apple containers have been great, especially since each of them maps 1:1 to a dedicated lightweight VM. Except for a bug or two that appeared in the early releases, things seem to be working out well. I believe not a lot of projects are leveraging it.
A general code execution sandbox for AI code or otherwise that used Apple containers is https://github.com/instavm/coderunner It can be hooked to Claude code and others.
> One of the things that makes Clawdbot great is the allow-all permissions model that lets it do anything.
Is this materially different than giving all files on your system 777 permissions?
It's vastly different.
It's more (exactly?) like pulling a .sh file hosted on someone else's website and running it as root, except the contents of the file are generated by a LLM, no one reads them, and the owner of the website can change them without your knowledge.
> Is this materially different than giving all files on your system 777 permissions?
Yes, because I can't read or modify your files over the internet just because you chmod'ed them to 777. But with Clawdbot, I can!
That was my line to the CS lab supervisor when asking him to hand me the superuser password. Guess what? He didn’t budge. Probably a good thing.
Lesson - never trust a sophomore who can’t even trust themselves (not to get overly excited and throw caution to the wind).
Clawdbot is 100 sophomores knocking on your door asking for the keys.
To be honest, when I see many vibecoded apps, I just build my own duplicate with Claude Code. It's not that useful to use someone else's vibecode. The idea is enough, or the evidence that it works for someone else means I can just build it myself with Claude Code and I can make it specific to my needs.
Yes exactly! Even non vibe coded libraries I think are losing their value as the cost of writing and maintaining your code goes to zero. Supply chain attacks are gone, no risk of license changes. No bloat from code you don't use. The code is the documentation and the configuration. The vibes are the package manager. That's why I like this version over openclaw. I can fork it as a starting point or just give it to Claude for inspiration but either way I'm getting something tailored exactly to me.
[dead]
I feel like a lot of non-technical people who are vibe coding or vibe using these models focus on hallucinations, believe the problem goes away as hallucinations are reduced in benchmarks, and overestimate their ability to create safe prompts that will keep these models in line.
I think most people fail to estimate the real threat that malicious prompts can cause because it is not that common. It’s like when credit cards were launched: cc fraud and the various ways it could be perpetrated followed soon after. The real threats aren’t visible yet, but rest assured there are actors working to take advantage, and many unfortunate examples will be seen before general awareness and precaution prevail.
This looks nice! I was curious about being allowed to use a Claude Pro/Max subscription vs an API key, since there's been so much buzz about that lately, so I went looking for a solid answer.
Thankfully the official Agent SDK Quickstart guide says that you can: https://platform.claude.com/docs/en/agent-sdk/quickstart
In particular, this bit:
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
But their docs also say:
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted to mean that you can’t use your Claude Code subscription with the Agent SDK, only API tokens.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
Didn't Thariq make it clear three weeks ago when they shut down 3rd party tool access and the OpenCode users were upset?
> Third-party harnesses using Claude subscriptions create problems for users and are prohibited by our Terms of Service.
https://xcancel.com/trq212/status/2009689809875591565
1 reply →
OP here. Yes! This was a big motivation for me to try and build this. Nervous Anthropic is gonna shut down my account for using Clawdbot.
This project uses the Agents SDK so it should be kosher in regards to terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine so I went with a hacky way of injecting the oauth token into the container environment. It still should be above board for TOS but it's the one security flaw that I know about (malicious person in a WhatsApp group with you can prompt inject the agent to share the oauth key).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
I went down this rabbit hole a bit recently trying to use claude inside fence[0] and it seems that on macOS, claude stores this token inside Keychain. I'm not sure there's a way to expose that to a container... my guess would be no, especially since it seems the container is Linux, and also because keeping the Keychain out of reach of containers seems like it would be paramount. But someone might know better!
0: https://github.com/Use-Tusk/fence
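For what it's worth, the host-side injection the OP describes could look roughly like this; the Keychain service name and the credential JSON shape here are guesses, not verified against Claude Code's actual storage:

```typescript
// Hypothetical sketch of the host-side token injection the OP describes.
// The Keychain service name and the credential JSON shape are guesses.
import { execFileSync } from "node:child_process";

function readClaudeToken(): string {
  // `security find-generic-password -s <service> -w` prints the secret.
  const raw = execFileSync(
    "security",
    ["find-generic-password", "-s", "Claude Code-credentials", "-w"],
    { encoding: "utf8" },
  ).trim();
  const creds = JSON.parse(raw); // shape assumed; adjust to what's stored
  return creds.claudeAiOauth.accessToken;
}

// Passed into the container as an env var, which is exactly the flaw the
// OP flags: anything that can read the env can exfiltrate the token.
const env = { CLAUDE_CODE_OAUTH_TOKEN: readClaudeToken() };
```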
3 replies →
Can you do everything via the SDK that you can via regular API calls? Caching etc. all works? You can get reasoning, responses, tool call info, ... ?
Ha. 4 days later it no longer says that and that doesn't appear to be supported anymore. Now the SDK requires an API key.
Wow, thanks for posting that, news to me! In this case I don’t understand why there was a whole brouhaha with OpenClaw and the like - I guess they were invoking it without the official SDK? Because this makes it seem like if you have the sub you can build any agentic thing you like and still use your subscription, as long as you can install and login to Claude code on the machine running it.
Tons of chatter on Twitter making it sound like you'll get permabanned for doing this but... 1) how would they know if my requests are originating from Claude Code vs. OpenClaw? 2) how are we violating... anything? I'm working within my usage limits...
$70 or whatever to check if there's milk... just use your Claude Max subscription.
4 replies →
Was there a brouhaha with OpenClaw or was that with OpenCode?
2 replies →
> No daemons, no queues, no complexity.
Last time I checked, a continuously running background process is considered a daemon. Using SQLite as a back-end for storing the jobs also doesn't make it queueless.
/nit
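Concretely, a table drained by a polling loop is a queue in all but name. A sketch (schema and names invented, not taken from the repo):

```typescript
// A SQLite table drained by a polling loop is a queue in all but name.
// Schema and names are invented for illustration, not taken from the repo.
import Database from "better-sqlite3";

const db = new Database("nanoclaw.db");
db.exec(`CREATE TABLE IF NOT EXISTS jobs (
  id      INTEGER PRIMARY KEY AUTOINCREMENT,
  chat_id TEXT NOT NULL,
  prompt  TEXT NOT NULL,
  status  TEXT NOT NULL DEFAULT 'pending'
)`);

function poll() {
  const job = db
    .prepare("SELECT * FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1")
    .get() as { id: number; chat_id: string; prompt: string } | undefined;
  if (job) {
    db.prepare("UPDATE jobs SET status = 'running' WHERE id = ?").run(job.id);
    // ...hand the job to the per-chat container, mark it 'done' on return...
  }
  setTimeout(poll, 1000); // the continuously running "non-daemon"
}
poll();
```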
I like the idea of a smaller version of OpenClaw.
Minor nitpick, it looks like about 2500 lines of typescript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.
> found it useful but running it scares
https://maordayanofficial.medium.com/the-sovereign-ai-securi...
What’s the difference between this, and just running Claude Code in --dangerously-skip-permissions mode in a container and accessing remotely via ssh?
I’m confused as to what these claw agents actually offer.
The README.md describes it as:
WhatsApp (baileys) --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response
So they basically put a wrapper around Claude in a container, which allows you to send messages from WhatsApp to Claude, and act somewhat as if you had a Siri on steroids.
Found the spec here: https://github.com/gavrielc/nanoclaw/blob/main/docs/SPEC.md
The scheduled tasks seem like the major functional difference. Pretty cool.
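The front half of that pipeline is plausibly just a baileys event handler writing rows. A sketch using baileys' standard event API (the table shape matches the queue sketch upthread; both are assumptions, not repo code):

```typescript
// Sketch of the WhatsApp -> SQLite front half using baileys' event API;
// the table name and message handling are assumptions, not repo code.
import makeWASocket, { useMultiFileAuthState } from "@whiskeysockets/baileys";
import Database from "better-sqlite3";

const db = new Database("nanoclaw.db");

async function start() {
  const { state, saveCreds } = await useMultiFileAuthState("auth");
  const sock = makeWASocket({ auth: state });
  sock.ev.on("creds.update", saveCreds);

  sock.ev.on("messages.upsert", ({ messages }) => {
    for (const m of messages) {
      // Plain text lives in `conversation`; quoted/extended text elsewhere.
      const text = m.message?.conversation;
      if (!text || m.key.fromMe) continue;
      // Enqueue for the polling loop; the container picks it up from here.
      db.prepare("INSERT INTO jobs (chat_id, prompt) VALUES (?, ?)")
        .run(m.key.remoteJid, text);
    }
  });
}
start();
```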
Has anyone tried Anthropic’s “Cowork”? How does that compare?
The "skills not features" contribution model is the most interesting part of this. Instead of a project that grows into another 52-module beast, contributors teach Claude how to transform the codebase per-user. It's basically contributing build instructions instead of build artifacts. If it actually works in practice, it's a genuinely novel approach to keeping small projects small.
Thanks! I believe that's where software is going. Just need Karpathy to give it a name so it can take off ;)
This violates the Claude Code subscription terms of service, so please be careful.
This project violates Claude Code's Terms of Service by automating Claude to create an unattended chatbot service that responds to third-party messaging platforms (WhatsApp, and what you add ...).
Interesting. Again, not a lawyer, but all of this is a bit murky and not sure it applies.
1. Usage is not automated and unattended - it only responds to messages that are sent to it with a specific prefix "Andy:"
2. This is not a bot service. It is not crawling twitter and responding to posts. Hard to see how sending it messages through WhatsApp is any different than through ssh via the terminal
3. I don't think a custom piece of software running on my computer that pipes data from a program into the Agents SDK is a third party "platform" integration.
How is this different from running Agents SDK as part of a CI process?
Interesting choice to use native Apple Containers over Docker.
I assume this is to keep the footprint minimal on a Mac Mini without the overhead of the Docker VM, but does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
>does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
Slightly counterintuitively, Apple Containers spawns Linux VMs.
There doesn't appear to be any way to spawn a native macOS container... which is a pity, it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/gui access that'd make it not-lightweight anyway)
FYI: it's easy enough to install GNU tools with homebrew (technically there's a risk of problems if applications spawn commandline tools and expect the BSD args/output, but I've not run into any issues in the several years I've been doing it).
Not sure if it's intended, but Apple Container is a microVM, providing much better isolation than containers (while retaining the familiar interface)
"much better isolation than containers"
If you've got an exploit for docker / linux containers, please share it with the class.
What I'm saying is that in practice, containers and VMs have both been quite secure.
Also, you can configure docker to run microvms too https://github.com/firecracker-microvm/firecracker-container...
1 reply →
[flagged]
What makes you think it's an AI comment?
1 reply →
If only there were some way to answer your own question. Maybe with some kind of engine that searches.
Is this an official Anthropic project? Because that repo doesn't exist.
Or is this just so hastily thrown together that the Quick Start is a hallucination?
That's not a facetious question, given this project's declared raison d'être is security and the subtle implication that OpenClaw is an insecure unreviewed pile of slop.
Fixed, thanks. Claude Code likes to insert itself and anthropic everywhere.
If it somehow wasn't abundantly clear: this is a vibe coded weekend project by a single developer (me).
It's rough around the edges but it fits my needs (talking with claude code that's mounted on my obsidian vault and easily scheduling cron jobs through whatsapp). And I feel a lot better running this than a +350k LOC project that I can't even begin to wrap my head around how it works.
This is not supposed to be something other people run as is, but hopefully a solid starting point for creating your own custom setup.
Claude hallucinated that repo here in this commit https://github.com/gavrielc/nanoclaw/commit/dbf39a9484d9c66b...
I like that Claude's hypothesis was that Anthropic created openclaw and this anti-openclaw :)
> This is the anti-[OpenClaw](https://github.com/anthropics/openclaw).
Seems to be fixed now
Thanks! Was hoping someone would do something more sane like this.
Openclaw is very useful, but like you I share the sentiment of it being terrifying, even before you introduce the social network aspect.
My Mac mini is currently literally switched off for this very reason.
Am I correct that after cloning down the project, you open the directory in Claude Code, then "execute" a markdown file instructing a nondeterministic LLM to set everything up for you in natural language?
The premise of the project is he doesn't want to run code he doesn't know, plus in an insecure way, so having the setup step (installing dependencies etc.) done by an LLM seems like an odd choice. Like, what part of the setup step is so fluffy and different per environment that using an LLM for it makes sense?
Posthog is doing this now for project setup
Not sure if this is meant to be sarcastic but isn't Posthog patient zero of Sha1-Hulud 2.0?
1 reply →
The idea of avoiding config files, and having the config be getting your agent to modify its own codebase, is fascinating.
My gut reaction says that I don't like it, but it is such an interesting idea to think about.
https://github.com/gavrielc/nanoclaw/commit/22eb5258057b49a0... Is this inserting an advertisement into the agent prompt?
Great idea and name. The danger here, which I'll be interested to track, is: how do you keep this "nano"? Since it's built for you, you'll continue adding features I assume, which over time will make this not very nano. I guess I'm wondering if there could be some small design tweaks of the repo that make this usable as a long term "fork the base and make it your own" concept
I will keep the source code as a minimal implementation that has the core capabilities that made Clawdbot/OpenClaw useful: chat with it via messaging app (only one channel included out of the box), memory (minimal implementation that leverages CLAUDE.md and the filesystem), cron jobs, browser.
If I want to add additional capabilities for myself, I'll contribute them to the project as skills for claude code to modify the code base, rather than directly to the source. I actually want to reduce the size of the base implementation and have a PR open to strip out 300-400 LOC
A personal implementation will always be "nano" compared to the full OpenClaw suite. As with literally everything, it's all relative.
To those who complain about these bots and the security concerns they raise, you basically have two options:
1. You can live in the future, and be at the bleeding edge of the latest AI tech, reaping the benefits. Be part of the solution.
2. You can stay in the past and get left behind, at the mercy of those who took the risks.
The 2. Thank you.
For anyone else worried about running openclaw, in my case I just bought openclaw its own Mac mini and gave openclaw its own accounts, including GitHub. It makes many of the security concerns moot. Of course, I could go further and give openclaw its own internet access as well.
That Baileys API for WhatsApp may (AFAICT) put you on thin ice with Meta. Is there a cheap legit alternative?
https://baileys.wiki/docs/intro/
I was using WAHA. It is an abstraction layer with a proper API on top. It supports many engines like Baileys and Whatsmeow (golang).
Unfortunately, all those solutions are shaky and could lead to a ban on your account.
https://waha.devlike.pro/
The singularity, but instead of successive exponential improvement, it's excessive exponential slop which passes the Turing test for programmers.
If you run openclaw on a spare laptop or VM and give it read only access to whatever it needs, doesn’t that eliminate most of the risk?
If you're letting it communicate with the outside world, you risk the leak and abuse of anything sensitive in the data it has access to.
s/risk/guarantee (given sufficient time)/
Not seeing how the sandbox prevents anything really. The point of OpenClaw is to connect out to different systems.
Sure but at least it protects against unauthorized free-for-all access on your host system. If you want to explicitly give it access to external APIs over the internet that's a risk you personally are taking. It's really smart to run something like this in a sandbox, especially in the current beta/experimentation phase.
I looked at Clawdbot. Perhaps my life is so boring that managing it takes little time but I see zero reasons to run it.
I read your comment, then your username. I CAN'T BELIEVE THIS USERNAME WAS CLAIMED 14 DAYS AGO! Good catch!
Took me around ten minutes of finding a simple username that wasn't taken.
Can you use MCP tools? I saw that with open claw they moved away from that which I personally didn't like but
I somewhat like the idea of not using MCP as much as it is being hyped.
It's certainly helpful for some things, but at the same time - I would rather improved CLI tools get created that can be used by humans and llm tools alike.
It uses a wrapper in places to consume MCPs as clis.
what's the difference between this and just exposing opencode running in colima or whatever through tailscale? I got the impression that Clawdbot adds the headless browser (does it?) and that's the value. Otherwise even "nano"claw seems like unnecessary bloat for me.
Where are those 500 lines of code?
Earlier that day: “hey Claude how many lines of code are in this project? 500? Great!”
It blows my mind that this wasn't the thought process going in. Thank you for doing this!
i installed clawdbot twice but didn't really use it because i couldn't wrap my head around the skills and plugins, this looks so much more manageable. and +1 for apple containers
def appreciate this more compact approach; everything is an experiment rn.
I realize you used Claude Agent SDK on purpose but I'd really like this to be agent agnostic. Maybe I'll figure that out...
500 lines? Single files in that repo already have more than 500 lines.
Can we start putting disclaimers beside the title on AI-generated projects? Extremely fatiguing to read through it and realize it’s mostly LLM slop.
can NanoClaw be used to participate in ClackerNews?
A personal assistant that runs in the standard cloud (anthropic in this case) is madness. That's the hill I'm willing to die on. Run it locally or use a cloud provider you can deeply trust.
Hackernews needs a mute keywords feature. Clawd/molt-slop is mass AI psychosis on steroids.
If only there was some sort of thing that would help you build that for yourself.
lol, I might finally have to upgrade my Mac mini to Tahoe. Yofi.
[dead]
[dead]
[dead]
[dead]
[dead]
[dead]
[dead]