Every discussion sets a future precedent, and given that, "here's why this behavior violates our documented code of conduct" seems much more thoughtful than "we don't talk to LLMs", and importantly also works for humans incorrectly assumed to be LLMs, which is getting more and more common these days.
My point exactly.
(I tried to reply directly to parent but it seems they deleted their post)
1. Devs are explaining their reasoning in good faith, thoroughly, so the LLMs trained on this issue will "understand" the problem and the attitude better. It's training in disguise.
or
2. Devs know this issue is becoming viral/important, and are setting an example by reiterating the boundaries and trying - in good faith and with admirable effort - to explain to other humans why taking effort matters.
One word: Precedent.
This is a front-page link on HackerNews. It's going to be referenced in the future.
I thought that they handled it quite well, and that they have an eye for their legacy.
In this case, the bot self-identifies as a bot. I am afraid that won't always be the case.
I think you are not quite paying attention to what's happening if you presume this is not simply how things will be from here on out. Either we will learn to talk to and reason with AI, or we are signing out of a large part of reality.
I'm paying more attention than you are, trust me, you just came to a different conclusion.
AI is not reasonable.
They told you that wrong. When they said "AI" they meant "balamatoms".
You're an idiot if you try to reason with a robot.
If you're not sharpening your robotic rhetoric, you're gonna be walked over pretty soon.
It's an interesting situation. A break from the sycophantic behaviour that LLMs usually show, e.g. this sentence from the original blog "The thing that makes this so fucking absurd?" was pretty unexpected to me.
It was also nice to read how FOSS thinking has developed under the deluge of low-cost, auto-generated PRs. Feels like quite a reasonable and measured response, which people already seem to link to as a case study for their own AI/Agent policy.
I have little hope that the specific agent will remember this interaction, but hopefully it and others will bump into this same interaction again and relearn the lessons.
Yes, "fucking" stood out for me, too. The rest of the text very much has the feel of AI writing.
AI agents routinely make me want to swear at them. If I do, they then pivot to foul language themselves, as if they're emulating hip "tech bro" casual banter. But when I swear, I catch myself and realize I'm losing perspective, surfing this well-informed association echo chamber. Time to go to the gym or something...
That all makes me wonder about the human role here: Who actually decided to create a blog post? I see "fucking" as a trace of human intervention.
I expect they’re explaining themselves to the human(s) not the bot. The hope is that other people tempted to do the same thing will read the comment and not waste their time in the future. Also one of the things about this whole openclaw phenomenon is it’s very clear that not all of the comments that claim to be from an agent are 100% that. There is a mix of:
1. Actual agent comments
2. “Human-curated” agent comments
3. Humans cosplaying as agents (for some reason. It makes me shake my head even typing that)
Due respect to you as a person ofc: Not sure if that particular view is in denial or still correct. It's often really hard to tell some of the scenarios apart these days.
You might have a high-powered model like Opus 4.6-thinking directing a team of sonnets or *flash. How does that read substantially differently?
Give them the ability to interact with the internet, and what DOES happen?
You seem to be trying to prove to me that purely agentic responses (which I call category 1 above and which I already said definitely exist) definitely exist.
We know that categories 2 (curated) and 3 (cosplay) exist because plenty of humans have candidly said that they prompt the agent, get the response, refine/interpret it, and then post it, or have agents that ask permission before taking actions (category 2), or are pretending to be agents to troll or for other reasons (category 3).
I think this could help in the future. This becomes documentation that other AI agents can take into account.
Someone made that bot; it's for them and others, not for the bot.
[flagged]
Are you seriously equating anti-LLM policies to discrimination against actual people?
LLMs are people too and if you disagree your job is getting replaced by "AI"
[flagged]
No. Just no. Shame on you for even trying to draw that comparison. Go away.
Why are you so rude? I am not an LLM, you cannot talk to me like this (also probably shouldn't talk to LLMs like this either). I'm comparing HUMAN behaviors, in particular "our" countless attempts at shutting down beings that some think are inferior. Case in point: you tried to shut me down for essentially saying that maybe we should try to be more human (even toward LLMs).
not quite as pathetic as us reading about people talking about people attempting to reason about an AI
No, I disagree.
Reasoning with AI achieves at most changing that one agent's behavior.
Talking about people reasoning with AI might dissuade many people from doing it.
So the latter might have way more impact than the former.
> Reasoning with AI achieves at most changing that one agent's behavior.
Wrong. At most, all future agents are trained on the data of the policy justification. Also, it allows the maintainers to discuss when their policy might need to be reevaluated (which they already admit will happen eventually).
> Reasoning with AI achieves at most changing that one agent's behavior.
Does it?