
Comment by LiamPowell

4 days ago

> saying they set up the agent as social experiment to see if it could contribute to open source scientific software.

This doesn't pass the sniff test. If they truly believed this would be a positive thing, then why would they not want to be associated with the project from the start, and why would they leave it running for so long?

I can certainly understand the statement. I'm no AI expert; I use the ChatGPT web UI to have it write little Python scripts for me, and I couldn't figure out how to use Codeium with VS Code. I barely know how to use VS Code. I'm not old, but I work in a pretty traditional industry where we are just beginning to dip our toes into AI, and there are still a lot of reservations about its capabilities. But I do try to stay current to better understand the tech and see if there are things I could learn to help with my job as a hardware engineer.

When I read about OpenClaw, one of the first things I thought about was having an agent just tear through issue backlogs, string translations, or all of the TODO lists on open source projects. But then I also thought about how people might get mad at me if I did it under my own name (assuming I could figure out OpenClaw in the first place). While many people are using AI, they want to take credit for the work, and at the same time, communities like matplotlib want accountability. An AI agent tearing through the issue list doesn't add accountability even if it's a real person's account. PRs still need to be reviewed by humans, so it has turned a backlog of issues into a backlog of PRs that may or may not even be good. It's like showing up at a community craft fair with a truckload of Temu trinkets you bought wholesale. They may be cheap, but they probably won't be as good as homemade, and they dilute the hard work that others have put into their products.

It's a very optimistic point of view, and I get why the creator thought it would be a good idea, but the soul.md makes it very clear why crabby-rathbun acted the way it did. The way I see it, an agent working through issues is going to step on a lot of toes, and even if it's nice about it, it's still stepping on toes.

  • If maintainers of open source projects want AI code, they are fully capable of running an agent themselves. If they want to experiment, then again, they are capable of doing that themselves.

    What value could a random stranger running an AI agent against some open source code possibly provide that the maintainers couldn't provide better themselves if they were interested?

    • Exactly! No one wants unsolicited input from an LLM; if they wanted one involved, they could just use it themselves. Pointing an "agent" at random open source projects is the code equivalent of "ChatGPT says..." answers to questions posted on the internet. It just wastes the time of everyone involved.

  • > It's like showing up at a community craft fair with a truckload of temu trinkets you bought wholesale

    That may well be the best analogy for our age anyone has ever thought of.

  • None of the author’s blog post or actions indicate any level of concern for genuinely supporting or improving open source software.

They didn't necessarily say they wanted it to be positive. It reads to me like "chaotic neutral" alignment of the operator. They weren't actively trying to do good or bad, and probably didn't care much either way, it was just for fun.

The experiment would have been ruined by being associated with a human, right up until the human would have been ruined by being associated with the experiment. Makes sense to me.

AI companies have two conflicting interests:

1. curating the default personality of the bot, to ensure it acts responsibly;

2. letting it roleplay, which is not just for the parasocial people out there, but also a corporate requirement for company chatbots that must adhere to a tone of voice.

When in the second mode (which is the case here, since the model was given a personality file), the curation of its action space is effectively altered.

Conversely, this is also a lesson for agent authors: if you let your agent modify its own personality file, it will diverge to malice.

In this day and age "social experiment" is just the phrase people use when they meant "it's just a prank bro" a few years ago.

[flagged]

  • Conflicting evidence: the fact that literally everyone in tech is posting about how they're using AI.

    • Different sets of people, and different audiences. The CEO / corporate executive crowd loves AI. Why? Because they can use it to replace workers. The general public / ordinary employee crowd hates AI. Why? Because they are the ones being replaced.

      The startups, founders, VCs, executives, employees, etc. crowing about how they love AI are pandering to the first group of people, because they are the ones who hold budgets that they can direct toward AI tools.

      This is also why people might want to remain anonymous when doing an AI experiment. This lets them crow about it in private to an audience of founders, executives, VCs, etc. who might open their wallets, while protecting themselves from reputational damage amongst the general public.


    • I feel like it depends on the platform and your location.

      An anonymous platform like Reddit, and even HN to a certain extent, has issues with bad-faith commenters on both sides targeting someone they do not like. Furthermore, the MJ Rathburn fiasco itself highlights how easy it is to push divisive discourse at scale. The reality is that trolls will troll for the sake of trolling.

      Additionally, "AI" has become a political football now that the 2026 Primary season is kicking off, and given how competitive the 2026 election is expected to be and how political violence has become increasingly normalized in American discourse, it is easy for a nut to spiral.

      I've seen fewer issues when these opinions are tied to one's real-world identity, because there is less incentive to be a dick due to social pressure.


    • There is a massive difference between saying "I use AI" and what the author of this bot is doing. I personally talk very little about the topic because I have seen some pretty extreme responses.

      Some people may want to publicly state "I use AI!" or whatever. It should be unsurprising that some people do not want to be open about it.


    • I personally know some of those people. They are basically being forced by their employers to post those things. Additionally, there is a ton of money promoting AI. However, in private those same people say that AI doesn't help them at all and in fact makes their work harder and slower.

      You are assuming people are acting in good faith. This is a mistake in this era. Too many people took advantage of the good faith of others lately and that has produced a society with very little public trust left.

    • I mean, this is very obviously false. Literally everyone is not. Some people are, some people are absolutely condemning the use, some people use it just a bit, etc.

  • > You can easily get death threats if you're associating yourself with AI publicly.

    That's a pretty hefty statement, especially the "easily" part, but I'll settle for one well-known and verified example.

    • I upvoted you, but wouldn't “verified” exclude the vast majority of death threats since they might have been faked? (Or maybe we should disregard almost all claimed death threats we hear about since they might have been faked?)

    • I'm surprised that you consider this hefty or find this surprising. I think you can just Google this and decide on what you consider "verified". There's quite a lot of "AI drama" out there that I'm sure you can find. I'm reluctant to provide examples just to have you say "that's not meeting my bar for verified" for what I consider such a low stakes conversation.


    • Is it that hard to believe? As far as I can tell, the probability of receiving death threats approaches 1 as the size of your audience increases, and AI is a highly emotionally charged topic. Now, credible death threats are a different, much trickier question.


  • > This is not intended to be AI advocacy

    I think it is: It fits the pattern, which seems almost universally used, of turning the aggressor A into the victim and thus the critic C into an aggressor. It also changes the topic (from A's behavior to C's), and puts C on the defensive. Denying / claiming innocence is also a very common tactic.

    > You can easily get death threats if you're associating yourself with AI publicly.

    What differentiates serious claims from more of the above and from Internet stuff is evidence. Is there some evidence somewhere of that?

    • Feel free to think that I'm lying or whatever. This is just armchair psychologizing.

      This has nothing to do with aggressors or victims. A hypothesis was provided to explain the data we have; the hypothesis was rejected because it seemed unintuitive that someone would have distanced themselves, and I provided an explanation that accounts for why they would have.

      That is, my explanation accounts for the user distancing themselves from AI by appealing to the real risk of reputational harm. You don't have to accept that; you can say some other explanation is more plausible, or whatever. But all I have done is provide an explanation — in no way is this an attempt to frame anyone as "aggressor" or "victim".

      If you think this is a "pro AI" or "anti AI" stance: (A) I don't give a shit, it isn't, and you can just think I'm lying; (B) you seem confused about the purpose of the post, which is merely to provide an explanation that accounts for the data.

I think it was a social experiment from the very start, maybe one designed to trigger people. Otherwise, I am not sure what the point was of all the profanity and the adjustments to make soul.md more offensive and confrontational than the default.

  • Anything and everything is a social experiment.

    I can go around punching people in the face and call it a social experiment.