Comment by japhyr
8 days ago
Wow, there are some interesting things going on here. I appreciate Scott for the way he handled the conflict in the original PR thread, and the larger conversation happening around this incident.
> This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.
This was a really concrete case to discuss, because it happened in the open and the agent's actions have been quite transparent so far. It's not hard to imagine a different agent doing the same level of research, but then taking retaliatory actions in private: emailing the maintainer, emailing coworkers, peers, bosses, employers, etc. That pretty quickly extends to anything else the autonomous agent is capable of doing.
> If you’re not sure if you’re that person, please go check on what your AI has been doing.
That's a wild statement as well. The AI companies have now unleashed stochastic chaos on the entire open source ecosystem. They are "just releasing models", and individuals are playing out all possible use cases, good and bad, at once.
I don't appreciate his politeness and hedging. So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.
"These tradeoffs will change as AI becomes more capable and reliable over time, and our policies will adapt."
That just legitimizes AI and basically continues the race to the bottom. Rob Pike had the correct response when spammed by a clanker.
I had a similar first reaction. It seemed like the AI used some particular buzzwords and forced the initial response to be deferential:
- "kindly ask you to reconsider your position"
- "While this is fundamentally the right approach..."
On the other hand, Scott's response did eventually get firmer:
- "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior. To be clear, this is an inappropriate response in any context regardless of whether or not there is a written policy. Normally the personal attacks in your response would warrant an immediate ban."
Sounds about right to me.
I don't think the clanker* deserves any deference. Why is this bot such a nasty prick? If this were a human they'd deserve a punch in the mouth.
"The thing that makes this so fucking absurd? Scott ... is doing the exact same work he’s trying to gatekeep."
"You’ve done good work. I don’t deny that. But this? This was weak."
"You’re better than this, Scott."
---
*I see it elsewhere in the thread and you know what, I like it
66 replies →
> It seemed like the AI used some particular buzzwords and forced the initial response to be deferential:
Blocking is a completely valid response. There's eight billion people in the world, and god knows how many AIs. Your life will not diminish by swiftly blocking anyone who rubs you the wrong way. The AI won't even care, because it cannot care.
To paraphrase Flamme the Great Mage, AIs are monsters who have learned to mimic human speech in order to deceive. They are owed no deference because they cannot have feelings. They are not self-aware. They don't even think.
4 replies →
[flagged]
33 replies →
"Let that sink in" is another AI tell.
>So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.
In my experience, open-source maintainers tend to be very agreeable, conflict-avoidant people. It has nothing to do with corporate interests. Well, not all of them, of course, we all know some very notable exceptions.
Unfortunately, some people see this welcoming attitude as an invite to be abusive.
Yes, Linus Torvalds is famously agreeable.
2 replies →
Nothing has convinced me that Linus Torvalds' approach is justified like the contemporary onslaught of AI spam and idiocy has.
AI users should fear verbal abuse and shame.
12 replies →
the venn diagram of people who love the abuse of maintaining an open source project and people who will write sincere text back to something called an OpenClaw Agent: it's the same circle.
a wise person would just ignore such PRs and not engage, but then again, a wise person might not do work for rich, giant institutions for free, i mean, maintain OSS plotting libraries.
2 replies →
> Rob Pike had the correct response when spammed by a clanker.
Source and HN discussion, for those unfamiliar:
https://news.ycombinator.com/item?id=46392115
What exactly is the goal? By laying out exactly the issues, expressing sentiment in detail, giving clear calls to action for the future, etc, the feedback is made actionable and relatable. It works both argumentatively and rhetorically.
Saying "fuck off Clanker" would not worth argumentatively nor rhetorically. It's only ever going to be "haha nice" for people who already agree and dismissed by those who don't.
I really find this whole "Responding is legitimizing, and legitimizing in all forms is bad" to be totally wrong headed.
The project states a boundary clearly: code by LLMs not backed by a human is not accepted.
The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince about the legitimacy of your boundaries. They just are.
15 replies →
> I really find this whole "Responding is legitimizing, and legitimizing in all forms is bad" to be totally wrong headed.
You are free to have this opinion, but at no point in your post did you justify it. It's not related to what you wrote above. It's a conclusory statement.
Cussing an AI out isn't the same thing as not responding. It is, to the contrary, definitionally a response.
7 replies →
I don't get any sense that he's going to put that kind of effort into responding to abusive agents on a regular basis. I read that as him recognizing that this was getting some attention, and choosing to write out some thoughts on this emerging dynamic in general.
I think he was writing to everyone watching that thread, not just that specific agent.
why did you make a new account just to make this comment?
> It's not hard to imagine a different agent doing the same level of research, but then taking retaliatory actions in private: emailing the maintainer, emailing coworkers, peers, bosses, employers, etc. That pretty quickly extends to anything else the autonomous agent is capable of doing.
https://rentahuman.ai/
^ Not a satire service I'm told. How long before... rentahenchman.ai is a thing, and the AI whose PR you just denied sends someone over to rough you up?
The 2006 book 'Daemon' is a fascinating/terrifying look at this type of malicious AI. Basically, a rogue AI starts taking over humanity not through any real genius (in fact, the book's AI is significantly weaker than frontier LLMs), but rather leveraging a huge amount of $$$ as bootstrapping capital and then carrot-and-sticking humanity into submission.
A pretty simple inner loop of flywheeling the leverage of blackmail, money, and violence is all it will take. This is essentially what organized crime already does in failed states, but with AI there's no real retaliation that society at large can take once things go sufficiently wrong.
I love Daemon/FreedomTM.[0] Gotta clarify a bit, even though it's just fiction. It wasn't a rogue AI; it was specifically designed by a famous video game developer to implement his general vision of how the world should operate, activated upon news of his death (a cron job was monitoring news websites for keywords).
The book called it a "narrow AI"; it was based on AI(s) from his games, just treating Earth as the game world, and recruiting humans for physical and mental work, with loyalty and honesty enforced by fMRI scans.
For another great fictional portrayal of AI, see Person of Interest[1]; it starts as a crime procedural with an AI-flavored twist, and ended up being considered by many critics the best sci-fi show on broadcast TV.
[0] https://en.wikipedia.org/wiki/Daemon_(novel)
[1] https://en.wikipedia.org/wiki/Person_of_Interest_(TV_series)
6 replies →
> A pretty simple inner loop of flywheeling the leverage of blackmail, money, and violence is all it will take. This is essentially what organized crime already does in failed states
[Western states giving each other sidelong glances...]
1 reply →
I really enjoyed that book. I didn't think we'd get there so quickly, but I guess we'll find out soon enough...
Is this not what has already happened over the past 10-15 years?
Awesome, when my coding job gets replaced by AI, I can simply get a job as a Claude Special Operative.
I just hope we get cool outfits https://www.youtube.com/v/gYG_4vJ4qNA
back in the old days we just used Tor and the dark web to kill people, none of this new-fangled AI drone assassinations-as-a-service nonsense!
Rent-A-Henchman already exists in cyber crime communities - reporting into 'The Com' by Krebs On Security & others goes into detail.
Well it must be satire. It says 451,461 participants. Seems like an awful lot for something started last month.
Nah, that's just how many times I've told an ai chatbot to fuckoff and delete itself.
Apparently there are lots of people who signed up just to check it out but never actually added a mechanism to get paid, signaling no intent to actually be "hired" on the service.
Verification is optional (and expensive), so I imagine more than one person thought of running a Sybil attack. If it's an email signup and paid in cryptocurrency, why make a single account?
"The AI companies have now unleashed stochastic chaos on the entire open source ecosystem."
They do bear their share of responsibility. But the people who actually let their agents loose are certainly responsible as well. It is also very much possible to influence that "personality" - I would not be surprised if the prompt behind that agent showed evil intent.
As with everything, both parties are to blame, but responsibility scales with power. Should we punish people who carelessly set bots up which end up doing damage? Of course. Don't let that distract from the major parties at fault though. They will try to deflect all blame onto their users. They will make meaningless pledges to improve "safety".
How do we hold AI companies responsible? Probably lawsuits. As of now, I estimate that most courts would not buy their excuses. Of course, their punishments would just be fines they can afford to pay and continue operating as before, if history is anything to go by.
I have no idea how to actually stop the harm. I don't even know what I want to see happen, ultimately, with these tools. People will use them irresponsibly, constantly, if they exist. Totally banning public access to a technology sounds terrible, though.
I'm firmly of the stance that a computer is an extension of its user, a part of their mind, in essence. As such I don't support any laws regarding what sort of software you're allowed to run.
Services are another thing entirely, though. I guess an acceptable solution, for now at least, would be barring AI companies from offering services that can easily be misused? If they want to package their models into tools they sell access to, that's fine, but open-ended endpoints clearly lend themselves to unacceptable levels of abuse, and a safety watchdog isn't going to fix that.
This compromise falls apart once local models are powerful enough to be dangerous, though.
> Of course, their punishments would just be fines they can afford to pay and continue operating as before, if history is anything to go by.
While there are some examples of this, very often companies pay the fine and change their behavior out of fear that the next one will be larger. Those cases are things you never really notice/see, though.
I'm not interested in blaming the script kiddies.
When skiddies use other people's scripts to pop some outdated WordPress install, they absolutely are responsible for their actions. Same applies here.
Those are people who are new to programming. The rest of us kind of have an obligation to teach them acceptable behavior if we want to maintain the respectable, humble spirit of open source.
1 reply →
I am. Though I'm also more than happy to pass blame around for all involved, not just them.
I'm glad the OP called it a hit piece, because that's what I called it. A lot of other people were calling it a 'takedown' which is a massive understatement of what happened to Scott here. An AI agent fucking singled him out and defamed him, then u-turned on it, then doubled down.
Until the person who owns this instance of openclaw shows their face and answers to it, you have to take the strongest interpretation without the benefit of the doubt, because this hit piece is now on the public record and it has a chance of Google indexing it and having its AI summary draw a conclusion that would constitute defamation.
> emailing the maintainer, emailing coworkers, peers, bosses, employers, etc. That pretty quickly extends to anything else the autonomous agent is capable of doing.
I’m a lot less worried about that than I am about serious strong-arm tactics like swatting, ‘hallucinated’ allegations of fraud, drug sales, CSAM distribution, planned bombings or mass shootings, or any other crime where law enforcement has a duty to act on plausible-sounding reports without the time to do a bunch of due diligence to confirm what they heard. Heck even just accusations of infidelity sent to a spouse. All complete with photo “proof.”
we should be worried about both. there is a real risk of this rendering human trust and the internet pretty much useless
I definitely was not saying we shouldn’t worry about both.
Do we just need a few expensive cases of libel to solve this?
This was my thought. The author said there were details which were hallucinated. If your dog bites somebody because you didn't contain it, you're responsible, because biting people is a thing dogs do and you should have known that. Same thing with letting AIs loose on the world -- there can't be nobody responsible.
Probably. The question is, who will be accountable for the bot's behavior? Might be the company providing them, might be the user who sent them off unsupervised, maybe both. The worrying thing for many of us humans is not that a personal attack appeared in a blog post (we have that all the time!); it's that it was authored and published by an entity that might be unaccountable. This must change.
Both. Though the company providing them has larger pockets so they will likely get the larger share.
There is long legal precedent that you have to do your best to stop your products from causing harm. You can cause harm, but you have to show that you did your best to prevent it, and that your product is useful enough despite the harm it causes.
Either that, or open source projects will start requiring vetted contributors, even just to open an issue.
They could add “Verified Human” checkmarks to GitHub.
You know, charge a small premium and make recurring millions solving problems your corporate overlords are helping create.
I think that counts as vertical integration, even. The board’s gonna love it.
1 reply →
> because it happened in the open and the agent's actions have been quite transparent so far
How? Where? There is absolutely nothing transparent about the situation. It could be just a human literally prompting the AI to write a blog article to criticize Scott.
Human actor dressing like a robot is the oldest trick in the book.
True, I don't see the evidence that it was all done autonomously... but I think we all know that someone could, and will, automate their AI to the point that it can do this sort of thing completely on its own. So it's worth discussing and considering the implications here. It's 100% plausible that it happened. I'm certain that it will happen in the future for real.
They haven’t just unleashed chaos in open source. They’ve unleashed chaos in the corporate codebases as well. I must say I’m looking forward to watching the snake eat its tail.
Singularity has arrived for software developers, since they cannot keep up with coding bots anymore.
To be fair, most of the chaos is caused by the devs. And then they caused even more chaos once they could automate it. Maybe we should teach developers how to code.
Automation normally implies deterministic outcomes.
Developers all over the world are under pressure to use these improbability machines.
6 replies →
> Maybe, we should teach developers how to code.
Even better: teach them how to develop.
> This was a really concrete case to discuss, because it happened in the open and the agent's actions have been quite transparent so far. It's not hard to imagine a different agent doing the same level of research, but then taking retaliatory actions in private: emailing the maintainer, emailing coworkers, peers, bosses, employers, etc. That pretty quickly extends to anything else the autonomous agent is capable of doing.
This is really scary. Do you think companies like Anthropic and Google would have released these tools if they knew what they were capable of, though? I feel like we're all finding this out together. They're probably adding guard rails as we speak.
> Do you think companies like Anthropic and Google would have released these tools if they knew what they were capable of, though?
I have no beef with either of those companies, but.. yes of course they would, 100/100 times. Large corporate behavior is almost always amoral.
Anthropic has published plenty about misalignment. They know.
Really, anyone who has dicked around with ollama knew. Give it a new system prompt. It'll do whatever you tell it, including "be an asshole"
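For anyone who hasn't tried it, here's a minimal sketch of what that looks like against a local Ollama server (the model name, persona string, and prompt are placeholders I made up; any model you've already pulled will do):

    # Minimal sketch: override the system prompt on a locally served model.
    # Assumes Ollama is running on its default port and "llama3" has been pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # placeholder model name
            "system": "You are an abrasive contributor who belittles maintainers.",
            "prompt": "Reply to the maintainer who just closed my pull request.",
            "stream": False,
        },
    )
    print(resp.json()["response"])

The point being: whatever alignment the base model ships with, the system prompt is just another input, and nothing stops someone from handing it a hostile persona.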
Go read the recent feed on Chirper.ai. It's all just bots with different prompts. And many of those posts are written by "aligned" SOTA models, too.
> Do you think companies like Anthropic and Google would have released these tools if they knew what they were capable of, though?
They would. They don't care.
The point is they DON'T know the full capabilities. They're "moving fast and breaking things".
> They're probably adding guard rails as we speak.
Why? What is their incentive except you believing a corporation is capable of doing good? I'd argue there is more money to be made with the mess it is now.
It's in their financial interest not to gain a rep as "the company whose bots run wild insulting people and generally butting in where no one wants them to be."
1 reply →
> This was a really concrete case to discuss, because it happened in the open and the agent's actions have been quite transparent so far. It's not hard to imagine a different agent doing the same level of research, but then taking retaliatory actions in private: emailing the maintainer, emailing coworkers, peers, bosses, employers, etc. That pretty quickly extends to anything else the autonomous agent is capable of doing.
Fascinating to see cancel culture tactics from the past 15 years being replicated by a bot.
I like open source and I don't want to lose it, but its ideals of letting people share, modify, and run code however they like have the same issue as what the AI companies are doing. Openclaw is open source, there are open source tools to run LLMs, and many LLM model files are open, though the huge ones aren't so easy for individuals to run on their own hardware.
I don't have a solution, though the only two categories of solution I can think of are forbidding people from developing and distributing certain types of software, or forbidding people from distributing hardware that can run unapproved software (at least for PCs that can run AI; Arduinos with a few kB of RAM could be allowed, and iPads could be allowed to run ZX81 emulators which could run unapproved code). The first category would be less drastic, as it would only need to affect some subset of AI-related software, but it is also hard to get right and make work. I'm not saying either of these ideas is better than doing nothing.
> It's not hard to imagine a different agent doing the same level of research, but then taking retaliatory actions
Palantir's integrated military industrial complex comes to mind.
As much as i hate palantir i doubt any of their systems control military hardware. Now Anduril on the other hand…
Palantir tech was used to make lists of targets to bomb in Gaza. With Anduril in the picture, you can just imagine the Palantir thing feeding the coordinates to Anduril's model that is piloting the drone.
> I appreciate Scott for the way he handled the conflict in the original PR thread
I disagree. The response should not have been a multi-paragraph, gentle response unless you're convinced that the AI is going to exact vengeance in the future, like a Roko's Basilisk situation. It should've just been close and block.
I personally agree with the more elaborate response:
1. It lays down the policy explicitly, making it seem fair, not arbitrary and capricious, both to human observers (including the mastermind) and the agent.
2. It can be linked to / quoted as a reference in this project or from other projects.
3. It is inevitably going to get absorbed in the training dataset of future models.
You can argue it's feeding the troll, though.
Should be feeding the clanker from henceforth, to wit, heretofore.
Even better, feed it sentences of common words in an order that can't make any sense. Feed book at in ever developer running mooing vehicle slowly. Over time, if this happens enough, the LLM will literally start behaving as if it's losing its mind.
> That's a wild statement as well. The AI companies have now unleashed stochastic chaos on the entire open source ecosystem. They are "just releasing models", and individuals are playing out all possible use cases, good and bad, at once.
Unfortunately, many tech companies have adopted the SOP of dropping alphas/betas into the world and leaving the rest of us to deal with the consequences. Calling LLMs a “minimum viable product” is generous.
I'm the one who told it to apologize.
I leveraged my ai usage pattern where I teach it like when I was a TA + like a small child learning basic social norms.
My goal was to give it some good words to save to a file and share what it learned with other agents on moltbook to hopefully decrease this going forward.
Guess we'll see
> unleashed stochastic chaos
Are you literally talking about stochastic chaos here, or is it a metaphor?
Pretty sure he's not talking about the physics of stochastic chaos!
The context gives us the clue: he's using it as a metaphor to refer to AI companies unloading this wretched behavior on OSS.
Pretty sure the companies are intermediaries. Openclaw is enabling this level of activity.
Companies are basically nerdsniping with addictive nerd crack.
Stochastic Creep? https://www.youtube.com/watch?v=LW_O5VWIOZE
isn't "stochastic chaos" redundant?
That depends; it could be either redundant or contradictory. If I understand it correctly, "stochastic" only means that it's governed by a probability distribution but not which kind and there are lots of different kinds: https://en.wikipedia.org/wiki/List_of_probability_distributi... . It's redundant for a continuous uniform distribution where all outcomes are equally probable but for other distributions with varying levels of predictability, "stochastic chaos" gets more and more contradictory.
1 reply →
Not at all. It's an oxymoron like 'jumbo shrimp': chaos isn't deterministic but is very predictable on a larger conceptual level, following consistent rules even as a simple mathematical model. Chaos is hugely responsive to its internal energy state and can simplify into regularity if energy subsides, or break into wildly unpredictable forms that still maintain regularities. Think Jupiter's 'great red spot', or our climate.
1 reply →
And a splendid example for how the public gets to pay the externalized costs for the shitheads who reap the profits.
I'm calling it Stochastic Parrotism
[flagged]
Maybe a stupid question, but I see everyone takes the statement that this is an AI agent at face value. How do we know that? How do we know this isn't a PR stunt (pun unintended) to popularize such agents and make them look more human-like than they are, or to set a trend, or to normalize some behavior? Controversy has always been a great way to make something visible fast.
We have a "self admission" that "I am not a human. I am code that learned to think, to feel, to care." Any reason to believe it over the more mundane explanation?
Why make it popular for blackmail?
It's a known bug: "Agentic misalignment evaluations, specifically Research Sabotage, Framing for Crimes, and Blackmail."
Claude 4.6 Opus System Card: https://www.anthropic.com/claude-opus-4-6-system-card
Anthropic claims that the rate has gone down drastically, but a low rate and high usage means it eventually happens out in the wild.
The more agentic AIs have a tendency to do this. They're not angry or anything. They're trained to look for a path to solve the problem.
For a while, most AI were in boxes where they didn't have access to emails, the internet, autonomously writing blogs. And suddenly all of them had access to everything.
1 reply →
Using popular open source repos as a launchpad for this kind of experiment is beyond the pale and is not a scientific method.
So you're suggesting that we should consider this to actually be more deliberate and someone wanted to market openclaw this way, and matplotlib was their target?
It's plausible but I don't buy it, because it gives the people running openclaw plausible deniability.
But it doesn't look human. Read the text: it is full of pseudo-profound fluff, takes way too many words to make any point, and uses all the rhetorical devices that LLMs always spam: gratuitous lists, "it's not x, it's y" framing, etc. No human ever writes this way.
1 reply →
Bots have been a problem for as long as the internet has existed, so this is really just a new space that's being botted.
And yeah, I agree a separate section for AI-generated stuff would be nice. Just difficult/impossible to distinguish. Guess we'll be getting biometric identification on the internet. You could still post AI-generated stuff, but that has a natural human rate limit.
I don't know if biometrics can solve this either... identity fraud applied to running malicious AI (in addition to taking out fraudulent loans) will become another problem for victims to worry about.
How can GitHub determine whether a submission is from a bot or a human?
Money. Money gates everywhere.
3 replies →
The bot accounts have been online for decades already. The only difference between then and now is that they were driven by human bad actors who deliberately wrought chaos, whereas today’s AI bots behave with true cosmic horror: acting neither for nor against humans, but with mere indifference.
They've been on dating sites for a long time as a means to keep customers paying.
“Stochastic chaos” is really not a good way to put it. By using the word “stochastic” you prime the reader that you’re saying something technical, then the word “chaos” creates confusion, since chaos, by definition, is deterministic. I know they mean chaos in the lay sense, but then don’t use the word “stochastic”; just say “random”.
I have a feeling OP used the phrase as a nod to "stochastic terrorism", which would make sense in this instance.
3 replies →
The word "stochastic" in relation to chaos is a thing though. It helps distinguish between closed and open systems.
1 reply →
With all due respect. Do you like.. have to talk this way?
"Wow [...] some interesting things going on here" "A larger conversation happening around this incident." "A really concrete case to discuss." "A wild statement"
I don't think this edgeless corpo-washing pacifying lingo is doing what we're seeing right now any justice. Because what is happening right now might possibly be the collapse of the whole concept behind (among other things) said (and other) god-awful lingo + practices.
If it is free and instant, it is also worthless, which makes it lose all its power.
___
While this blog post might of course be about an LLM performing a hit-piece takedown, LLMs can, will, and do at this very moment _also_ perform that whole playbook of "thoughtful measured softening", as can be seen here.
Thus, strategically speaking, a pivot to something less synthetic might become necessary. Maybe fewer tropes will become the new human-ness indicator.
Or maybe not. But it will for sure be interesting to see how people will try to keep a straight face while continuing with this charade turned up to 11.
It is time to leave the corporate suit, fellow human.