Comment by Kim_Bruning

6 days ago

It made a number of decisions that - by themselves - are probably not that interesting. We've had LLMs produce interesting outputs before.

It also had the ability to act on them, which - individually - is not that strange. Programs automatically posting to blogs have existed before.

But here it was an LLM that decided to escalate a dispute by posting to a blog (and then de-escalate, too). It's the combination that's interesting:

an agent semi-autonomously 'playing the game' using the tools available to it.

Notice that it didn't actually attack the person, as claimed by the guy rejecting the PR. I was surprised how reasonable the 'attack' actually was. I've never been part of an open-source project, but if you don't allow code from an AI because it could be bad, surely you don't allow code from random people off the internet for the same reason?