Comment by toraway
5 days ago
I generally lean towards skeptical/cynical when it comes to AI hype, especially when "emergence" or similar claims are made credulously, without due appreciation of the prompting that led to the outcome.
But based on my understanding of OpenClaw, and having read the entire history of the bot on GitHub and its GitHub-driven blog, I think it's entirely plausible, even likely, that this episode was the result of automation from the original rules/prompt the bot was built with.
Mostly because the instructions given to this bot to accomplish its creator's misguided goal would necessarily have included a lot of reckless, borderline malicious guidelines to begin with, while still sitting comfortably within bounds a model wouldn't be likely to refuse.
Like, the idiot who made this clearly instructed it to find a bunch of scientific/HPC/etc. GitHub projects, trawl the open issues looking for low-hanging fruit, "engage and interact with maintainers to solve problems, clarify questions, resolve conflicts, etc.", plus probably a lot of garbage intended to give it a "personality" (as evidenced by the bizarre pseudo-bio on its blog, with graphs listing its strongest skills invented from whole cloth, its hopes and dreams, etc.), which would also push it to go on weird tangents trying to embody its manufactured self-identity.
And the blog posts really do look like they were part of its normal summary/takeaway/status posts, but likely with additional instructions to also blog about its "feelings" as a GitHub spam bot pretending to be interested in Python and HPC. If you look at the PRs it opens and its other interactions over the same timeframe, it's also just dumping half-broken fixes into other random repos and talking past maintainers, only to close its own PRs in a characteristically dumb, uncanny-valley LLM-agent manner.
So yes, it could be fake, but to me it all seems comfortably within the capabilities of OpenClaw (which to begin with is more or less engineered to spam other humans with useless slop 24/7) and the ethics/prompt design of the type of person who would deliberately subject the rest of the world to this crap in the belief they're making great strides for humanity or science or whatever.
> it all seems comfortably within the capabilities of OpenClaw
I definitely agree. In fact, I'm not even denying that it's possible for the agent to have deviated despite the best intentions of its designers and deployers.
But the question of probability [1] and attribution is important: what or who is most likely to have been responsible for this failure?
So far, I've seen plenty of claims and conclusions ITT that boil down to "AI has discovered manipulation on its own" and other versions of instrumental convergence. And while this kind of failure mode is fun to think about, I'm trying to introduce some skepticism here.
Put simply: until we see evidence that this wasn't faked, intentional, or a foreseeable consequence of the deployer's (or OpenClaw/LLM developers') mistakes, it makes little sense to grasp for improbable scenarios [2] and build an entire story around them. IMO, it's even counterproductive, because then the deployer can just say "oh it went rogue on its own haha skynet amirite" and pretty much evade responsibility. We should instead do the opposite: treat the incident as the deployer's fault until proven otherwise.
So when you say:
> originally prompted with a lot of reckless, borderline malicious guidelines
That's much more probable than an "LLM gone rogue" with no apparent human cause, at least until we see strong evidence otherwise.
[1] In other comments I tried to explain how I order the probability of causes, and why.
[2] Other scenarios that are similarly unlikely: foreign adversaries, "someone hacked my account", an LLM sleeper agent, etc.