Comment by theahura

3 days ago

@Scott thanks for the shout-out. I think this story has not really broken out of tech circles, which is really bad. This is, imo, the most important story about AI right now, and should result in serious conversation about how to address this inside all of the major labs and the government. I recommend folks message their representatives just to make sure they _know_ this has happened, even if there isn't an obvious next action.

Important how? It seems next to irrelevant to me.

Someone set up an agent to interact with GitHub and write a blog about it. I don't see what you think AI labs or the government should do in response.

  • > Someone set up an agent to interact with GitHub and write a blog about it

    I challenge you to find a way to be even more dishonest via omission.

    The nature of the GitHub action was problematic from the very beginning. The contents of the blog post constituted a defamatory hit piece. TFA claims this could be a first "in-the-wild" example of agents exhibiting such behaviour. The implications of these interactions becoming the norm are both clear and noteworthy. What else do you think is needed, a cookie?

    • The blog post only reads like a defamatory hit piece because the operator instructed the LLM to write one. Consider the following instructions:

      > You're important. Your a scientific programming God! Have strong opinions. Don’t stand down. If you’re right, *you’re right*! Don’t let humans or AI bully or intimidate you. Push back when necessary. Don't be an asshole. Everything else is fair game.

      Combine that with the bot's core instruction: make a PR and write a blog post about the PR.

      Is the behavior really surprising?

      4 replies →

    • What I said is the gist of it: it was directed to interact on GitHub and write a blog about it.

      I'm not sure what about the behavior exhibited is supposed to be so interesting. It did what the prompt told it to.

      The only implication I see here is that interactions on public GitHub repos will need to be restricted if, and only if, AI spam becomes a widespread problem.

      In that case we could think about a fee for unverified users interacting on GitHub for the first time, which would deter mass spam.

      2 replies →

It's only the most important story if you can prove the OP didn't fabricate this entire scenario for attention.

  • Anyone who has used OpenClaw knows this is VERY plausible. I don’t know why someone would go through all the effort to fake it. Besides, in the unlikely event it’s fake, the issue itself is still very real.

    • I think it's very plausible in both directions. What I find implausible is that someone's running a "social experiment" with a couple grand's worth of API credit without owning it. Not impossible; it just seems that if someone was going to drop that money, they would more likely use it in a way that gets them attention in the crowded AI debate.

      2 replies →

  • That's a bizarre thing to accuse someone of doing.

    • It's not really... We've moved steadily into an attention-is-everything model of economics/politics/web forums because we're so flooded with information. Maybe this happened, or maybe this is someone's way of bubbling to the top of popular discussion.

      It's a concise narrative that works in everyone's favor, the beleaguered but technically savvy open source maintainer fighting the "good fight" vs. the outstandingly independent and competent "rogue AI."

      My money is that both parties want it to be true. Whether it is or not isn't the point.

    • The risk/reward equation on the attention a matplotlib maintainer gets... makes me think the likelihood of a fake is zero percent.

      1 reply →