Comment by NobodyNada

9 days ago

> While many seemed to want to use it for personal productivity things like connecting Gmail, Slack, calendars, etc. that didn't seem interesting to me much. I thought why not have it solve the mundane boring things that matter in open-source scientific codes and related packages.

This, here, is the root of the issue: "I'm not interested in using an AI agent for my own problems, I want to unleash it on other people's problems."

The author is trying to paint this as somehow providing altruistic contributions to the projects, but you don't even have to ask to know these contributions will be unwelcome. If maintainers wanted AI agent contributions, they would have just deployed the AI agents themselves. Setting up a bot on behalf of someone else without their consent or even knowledge is an outlandishly rude thing to do -- you wouldn't set up a code coverage bot or a linter to run on a stranger's GitHub project; why would anyone ever think this is okay?

This is the same kind of person who, when asked a question, responds with a copypasted ChatGPT reply. If I wanted the GPT answer, I would have just asked it directly! Being an unsolicited middleman between another person and an AI brings absolutely no value to anybody.

I think this was misdirection on the author's part, meant to steer people away from using the AI's (early?) contributions to unmask their identity via personal repos. Or, if they actually did run it on their own repos first, it was an opsec procedure: nothing altruistic about it. If GitHub wanted to, or was ordered to, unmask Rat H. Bun's operator, they could.