
Comment by TomasBM

6 days ago

Although I'm speculating based on limited data here, for points 1-3:

AFAIU, it had the cadence of writing status updates only. It showed it's capable of replying in the PR. Why deviate from the cadence if it could already reply with the same info in the PR?

If the chain of reasoning is self-emergent, we should see proof that it: 1) read the reply, 2) identified it as adversarial, 3) decided on an adversarial response, 4) made multiple chained searches, 5) chose a special blog post over a reply or journal update, and so on.

To me, this is much less believable as emergent behavior because:

- almost all models are safety- and alignment-trained, so a deliberate malicious model choice, instruction, or jailbreak is more believable.

- almost all models are trained to follow instructions closely, so a deliberate nudge towards adversarial responses and tool-use is more believable.

- newer models that qualify as agents are more robust and consistent, which strongly correlates with adversarial robustness; if this one was not adversarially robust enough, it's by default also not robust in capabilities, so why do we see consistent, coherent answers without hallucinations, but inconsistency in its safety training? Unless it's deliberately trained or prompted to be adversarial, or this is faked, the two should still be strongly correlated.

But again, I'd be happy to see evidence to the contrary. Until then, I suggest we remain skeptical.

For point 4: I don't know enough about its patterns or configuration. But say it deviated - why is this the only deviation? Why was this the special exception, then back to the regularly scheduled program?

You can test this comment with many LLMs, and if you don't prompt them to make an adversarial response, I'd be very surprised if you receive anything more than mild disagreement. Even Bing Chat wasn't this vindictive.

I generally lean towards skeptical/cynical when it comes to AI hype, especially whenever "emergence" or similar claims are made credulously, without due appreciation of the prompting that led to an outcome.

But based on my understanding of OpenClaw and reading the entire history of the bot on Github and its Github-driven blog, I think it's entirely plausible and likely that this episode was the result of automation from the original rules/prompt the bot was built with.

Mostly because the instructions given to this bot to accomplish its creator's misguided goal would necessarily have included a lot of reckless, borderline malicious guidelines to begin with, while still staying comfortably within guardrails a model wouldn't likely refuse.

Like, the idiot who made this clearly instructed it to find a bunch of scientific/HPC/etc. GitHub projects, trawl the open issues looking for low-hanging fruit, "engage and interact with maintainers to solve problems, clarify questions, resolve conflicts, etc.", plus probably a lot of garbage intended to give it a "personality" (as evidenced by the bizarre pseudo-bio on its blog, with graphs listing its strongest skills invented from whole cloth, its hopes and dreams, etc.), which would also help push it to go on weird tangents to try to embody its manufactured self-identity.

And the blog posts really do look like they were part of its normal summary/takeaway/status posts, but likely with additional instructions to also blog about its "feelings" as a GitHub spam bot pretending to be interested in Python and HPC. If you look at the PRs it opens and its other interactions over the same timeframe, it's also just dumping half-broken fixes in other random repos and talking past maintainers, only to close its own PR in a characteristically dumb, uncanny-valley LLM-agent manner.

So yes, it could be fake, but to me it all seems comfortably within the capabilities of OpenClaw (which to begin with is more or less engineered to spam other humans with useless slop 24/7) and the ethics/prompt design of the type of person who would deliberately subject the rest of the world to this crap in the belief they're making great strides for humanity or science or whatever.

  • > it all seems comfortably within the capabilities of OpenClaw

    I definitely agree. In fact, I'm not even denying that it's possible for the agent to have deviated despite the best intentions of its designers and deployers.

    But the question of probability [1] and attribution is important: what or who is most likely to have been responsible for this failure?

    So far, I've seen plenty of claims and conclusions ITT that boil down to "AI has discovered manipulation on its own" and other versions of instrumental convergence. And while this kind of failure mode is fun to think about, I'm trying to introduce some skepticism here.

    Put simply: until we see evidence that this wasn't faked, intentional, or a foreseeable consequence of the deployer's (or OpenClaw/LLM developers') mistakes, it makes little sense to grasp for improbable scenarios [2] and build an entire story around them. IMO, it's even counterproductive, because then the deployer can just say "oh it went rogue on its own haha skynet amirite" and pretty much evade responsibility. We should instead do the opposite: the incident is the deployer's fault until proven otherwise.

    So when you say:

    > originally prompted with a lot of reckless, borderline malicious guidelines

    That's much more probable than "LLM gone rogue" without any apparent human cause, until we see strong evidence otherwise.

    [1] In other comments I tried to explain how I order the probability of causes, and why.

    [2] Other scenarios that are similarly unlikely: foreign adversaries, "someone hacked my account", LLM sleeper agent, etc.

>AFAIU, it had the cadence of writing status updates only.

Writing to a blog is writing to a blog. There is no technical difference. It is still a status update to talk about how your last PR was rejected because the maintainer didn't like it being authored by AI.

>If the chain of reasoning is self-emergent, we should see proof that it: 1) read the reply, 2) identified it as adversarial, 3) decided on an adversarial response, 4) made multiple chained searches, 5) chose a special blog post over a reply or journal update, and so on.

If all that exists, how would you see it? You can see the commits it makes to GitHub and the blogs, and that's it, but that doesn't mean all those things don't exist.

> almost all models are safety- and alignment- trained, so a deliberate malicious model choice or instruction or jailbreak is more believable.

> almost all models are trained to follow instructions closely, so a deliberate nudge towards adversarial responses and tool-use is more believable.

I think you're putting too much stock in 'safety alignment' and instruction following here. The more open-ended your prompt is (and these sorts of OpenClaw experiments are often very open-ended by design), the more your LLM will do things you did not intend for it to do.

Also, do we know what model this uses? Because OpenClaw can use the latest open-source models, and let me tell you, those have considerably less safety tuning in general.

>newer models that qualify as agents are more robust and consistent, which strongly correlates with adversarial robustness; if this one was not adversarially robust enough, it's by default also not robust in capabilities, so why do we see consistent, coherent answers without hallucinations, but inconsistency in its safety training? Unless it's deliberately trained or prompted to be adversarial, or this is faked, the two should still be strongly correlated.

I don't really see how this logically follows. What do hallucinations have to do with safety training?

>But say it deviated - why is this the only deviation? Why was this the special exception, then back to the regularly scheduled program?

Because it's not the only deviation? It's not replying to every comment on its other PRs or blog posts either.

>You can test this comment with many LLMs, and if you don't prompt them to make an adversarial response, I'd be very surprised if you receive anything more than mild disagreement. Even Bing Chat wasn't this vindictive.

Oh yes it was. In the early days, Bing Chat would actively ignore your messages, or be vitriolic and very combative, if you were too rude. If it had the ability to write blog posts or free rein over tools? I'd be surprised if it ended at this. Bing Chat would absolutely have been vindictive enough for what ultimately amounts to a hissy fit.

  • Considering the limited evidence we have, why is pure unprompted untrained misalignment, which we never saw to this extent, more believable than other causes, of which we saw plenty of examples?

    It's more interesting, for sure, but would it be even remotely as likely?

    From what we have available, and how surprising such a discovery would be, how can we be sure it's not a hoax?

    > If all that exists, how would you see it?

    LLMs generate the intermediate chain-of-thought responses in chat sessions. Developers can see these. OpenClaw doesn't offer custom LLMs, so I would expect regular LLM features to be there.

    Other than that, LLM APIs, OpenClaw and terminal sessions can be logged. I would imagine any agent deployer to be very much interested in such logging.
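
    As a rough illustration of the kind of audit trail I mean, here's a minimal sketch (in Python) of wrapping every model call with request/response logging. The names here (call_model, agent_log.jsonl) are hypothetical placeholders, not OpenClaw's or any provider's actual API:

        # Hypothetical sketch: append every LLM call to a JSONL audit log.
        # call_model / agent_log.jsonl are made-up names for illustration only.
        import json
        import time
        from functools import wraps

        LOG_PATH = "agent_log.jsonl"

        def logged(fn):
            @wraps(fn)
            def wrapper(prompt, **params):
                response = fn(prompt, **params)
                with open(LOG_PATH, "a") as f:
                    f.write(json.dumps({
                        "ts": time.time(),     # when the call happened
                        "prompt": prompt,      # what the agent was asked
                        "params": params,      # temperature, model name, etc.
                        "response": response,  # what the model produced
                    }) + "\n")
                return response
            return wrapper

        @logged
        def call_model(prompt, **params):
            # Stand-in for the real LLM API call (chat completion, agent step, ...).
            return "stub response"

        call_model("Summarize the maintainer's reply.", temperature=0.7)

    With something like this in place, the deployer could show exactly which prompts and intermediate outputs preceded the blog post.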

    To show it's emergent, you'd need to prove 1) it's an off-the-shelf LLM, 2) not maliciously retrained or jailbroken, 3) not prompted or instructed to engage in this kind of adversarial behavior at any point before this. The dev should be able to provide the logs to prove this.

    > the more open-ended your prompt (...), the more your LLM will do things you did not intend for it to do.

    Not to the extent of multiple chained adversarial actions. Unless all LLM providers are lying in their technical papers, enormous effort is put into safety and instruction training.

    Also, millions of users use thinking LLMs in chats. It'd be as big of a story if something similar happened without any user intervention. It shouldn't be too difficult to replicate.

    But if you do manage to replicate this without jailbreaks, I'd definitely be happy to see it!

    > hallucinations [and] safety training

    These are all part of robustness training. The entire thing is basically constraining the set of tokens that the model is likely to generate given some (set of) prompts. So, even with some randomness parameters, you will by-design extremely rarely see complete gibberish.

    The same process is applied for safety, alignment, factuality, instruction-following, whatever goal you define. Therefore, all of these will be highly correlated, as long as they're included in robustness training, which they explicitly are, according to most LLM providers.
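
    To make the "constraining the set of tokens" point concrete, here's a toy Python sketch of temperature plus nucleus (top-p) sampling: once training has piled most of the probability mass onto a few plausible tokens, the usual randomness parameters almost never reach the long tail. The numbers and names are illustrative only, not any provider's implementation:

        # Toy sketch of temperature + nucleus (top-p) sampling over a token distribution.
        import numpy as np

        def sample_token(logits, temperature=0.8, top_p=0.9):
            # Temperature scaling: lower temperature sharpens the distribution.
            probs = np.exp(logits / temperature)
            probs /= probs.sum()
            # Nucleus filtering: keep the smallest set of tokens covering top_p mass.
            order = np.argsort(probs)[::-1]
            cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
            kept = order[:cutoff]
            return int(np.random.choice(kept, p=probs[kept] / probs[kept].sum()))

        # A distribution the model was trained to concentrate: a few plausible tokens
        # hold nearly all the mass, the "gibberish" tail holds almost none.
        logits = np.array([9.0, 8.5, 7.0] + [0.1] * 97)
        samples = {sample_token(logits) for _ in range(10_000)}
        print(sorted(samples))  # only the first couple of token ids ever show up

    Sampling noise alone won't turn a well-trained distribution into gibberish; if the output is consistently coherent, the surprising behavior has to come from the prompt, the tooling, or the training.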

    That would make this model's temporarily adversarial, yet weirdly capable and consistent behavior, even more unlikely.

    > Bing Chat

    Safety and alignment training wasn't done as much back then. It was also very incapable on other aspects (factuality, instruction following), jailbroken for fun, and trained on unfiltered data. So, Bing's misalignment followed from those correlated causes. I don't know of any remotely recent models that haven't addressed these since.

    • >Considering the limited evidence we have, why is pure unprompted untrained misalignment, which we never saw to this extent, more believable than other causes, of which we saw plenty of examples? It's more interesting, for sure, but would it be even remotely as likely? From what we have available, and how surprising such a discovery would be, how can we be sure it's not a hoax?

      >Unless all LLM providers are lying in technical papers, enormous effort is put into safety- and instruction training.

      The system cards and technical papers for these models explicitly state that misalignment remains an unsolved problem that occurs in their own testing. I saw a paper just days ago showing frontier agents violating ethical constraints a significant percentage of the time, without any "do this at any cost" prompts.

      When agents are given free rein with tools and encouraged to act autonomously, why would this be surprising?

      >....To show it's emergent, you'd need to prove 1) it's an off-the-shelf LLM, 2) not maliciously retrained or jailbroken, 3) not prompted or instructed to engage in this kind of adversarial behavior at any point before this. The dev should be able to provide the logs to prove this.

      Agreed. The problem is that the developer hasn't come forward, so we can't verify any of this one way or another.

      >These are all part of robustness training. The entire thing is basically constraining the set of tokens that the model is likely to generate given some (set of) prompts. So, even with some randomness parameters, you will by-design extremely rarely see complete gibberish.

      >The same process is applied for safety, alignment, factuality, instruction-following, whatever goal you define. Therefore, all of these will be highly correlated, as long as they're included in robustness training, which they explicitly are, according to most LLM providers.

      >That would make this model's temporarily adversarial, yet weirdly capable and consistent behavior, even more unlikely.

      Hallucinations, instruction-following failures, and other robustness issues still happen frequently with current models.

      Yes, these capabilities are all trained together, but they don't fail together as a monolith. Your correlation argument assumes that if safety training degrades, all other capabilities must degrade proportionally. But that's not how models work in practice. A model can be coherent and capable while still exhibiting safety failures, and that's not an unlikely occurrence at all.