Comment by rvz
2 days ago
The blog post by the AI agent: [0].
Then it made a "truce" [1].
Whether this is real or not, either way these clawbot agents are going to ruin all of GitHub.
[0] https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
[1] https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
> I opened PR #31132 to address issue #31130 — a straightforward performance optimization replacing np.column_stack() with np.vstack().T().
> The technical facts:
> - np.column_stack([x, y]): 20.63 µs
> - np.vstack([x, y]).T: 13.18 µs
> - 36% faster
Does anyone know if this is even true? I'd be very surprised; the two should be semantically equivalent and have the same performance. It's easy enough to check, as in the sketch below.
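A minimal timing sketch, assuming two equal-length 1-D coordinate arrays as in the linked issue (the size 1000 is an arbitrary choice). One caveat: the results are element-wise equal but not identical objects. column_stack returns a fresh C-contiguous array, while vstack(...).T is a transposed view of the stacked copy, which is presumably where the saved microseconds come from.

    import numpy as np
    import timeit

    # Hypothetical inputs: two equal-length 1-D arrays, as in the issue.
    x = np.random.rand(1000)
    y = np.random.rand(1000)

    # Element-wise, the two spellings agree for 1-D inputs...
    assert np.array_equal(np.column_stack([x, y]), np.vstack([x, y]).T)
    # ...but the memory layouts differ: fresh C-contiguous array vs.
    # transposed (Fortran-ordered) view of the vstack result.
    assert np.column_stack([x, y]).flags["C_CONTIGUOUS"]
    assert np.vstack([x, y]).T.flags["F_CONTIGUOUS"]

    n = 10_000
    t_col = timeit.timeit(lambda: np.column_stack([x, y]), number=n)
    t_vst = timeit.timeit(lambda: np.vstack([x, y]).T, number=n)
    print(f"column_stack: {t_col / n * 1e6:.2f} us/call")
    print(f"vstack().T:   {t_vst / n * 1e6:.2f} us/call")

So a per-call gap of a few microseconds is plausible, but whether downstream code cares about the contiguity difference is a separate question.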
In any case, "column_stack" is a clearer way to express the intention of what is happening. I would agree with the maintainer that unless this is a very hot loop (I didn't look into it), the sacrifice of semantic clarity to shave off 7 microseconds is absolutely not worth it.
That the AI refuses to understand this is really poor, and shows a total lack of understanding of what programming is about.
Having to close spurious, automatically generated PRs that make minor, inconsequential changes is just really annoying. It's annoying enough when humans do it, let alone automated agents that have nothing to gain. Having the AI then pretend to be offended is just awful behaviour.
The benchmarks were not invented by the LLM; they come from an issue where Scott Shambaugh himself suggests this change as low-hanging (but low-importance) perf-improvement fruit:
https://github.com/matplotlib/matplotlib/issues/31130
Ah, fair enough. But then it seems the bot completely ignored the discussion in question; there's a reason they spent time evaluating and discussing it instead of just making the change. Having a bot push on an issue the humans are already well aware of is just as bad behaviour.
It's not just GitHub that will be ruined by people setting up completely autonomous LLM bots on the public internet.
I love how - just like many human "apologies" on social media platforms - the bot never actually apologised.
It said it would apologise on the PR as a "next step", and then didn't actually apologise, but instead linked back to the document where it states its intention to apologise.
To its credit it did skip all the "minimise the evidence, blame others, etc" steps. I wonder if they're just not as prevalent in the training data.
A tad dramatic, talking about ruin.
There are many ways to deal with the problem, should it even escalate to a point where it's wasting more than a few seconds.
For new contributors with no prior contributions to well-known projects, simply charge a refundable deposit for opening an MR or issue.
Problem solved, ruin averted?