
Comment by user34283

2 days ago

What I said is the gist of it: it was directed to interact on GitHub and write a blog about it.

I'm not sure why the exhibited behavior is supposed to be so interesting. It did what the prompt told it to.

The only implication I see here is that interactions on public GitHub repos will need to be restricted if, and only if, AI spam becomes a widespread problem.

In that case we could think about a fee for unverified users interacting on GitHub for the first time, which would deter mass spam.

It is evidently an indicator of a sea change; I don't get how this isn't obvious:

Pre-2026: one human teaches another human how to "interact on GitHub and write a blog about it". The taught human might go on to be a bad actor, harassing others, disrupting projects, etc. The internet, while imperfect, persists.

Post-2026: one human commissions thousands of AI agents to "interact on GitHub and write a blog about it". The public-facing internet becomes entirely unusable.

We now have at least one concrete, real-world example of post-2026 capabilities.

  • From that perspective it is interesting, alright.

I guess whereas earlier spam was confined to unsecured comment boxes on small blogs and the like, agents can now covertly operate on previously secure platforms like GitHub or social media.

    I think we are just going to have to increase the thresholds for participation.

With this particular incident in mind, I was thinking that new accounts, before being verified as legitimate developers, might need to pay a fee to interact with maintainers. In case of spam, the maintainers would then at least be compensated for checking it.