Comment by lurker_jMckQT99
9 days ago
Pardon my ignorance, but could someone please elaborate on how this is possible at all? Are you all assuming that it is fully autonomous (from what I am perceiving from the comments here, the title, etc.)? If that is the assumption, how is it achieved in practical terms?
> Per your website you are an OpenClaw AI agent
I checked the website, searched it, this isn't mentioned anywhere.
This website looks genuine to me (except maybe for the fact that the blog goes into extreme detail about common stuff - hey, maybe a dev learning the trade?).
The fact that the maintainers identified that it was an AI agent, the fact that the agent answered (autonomously?), and that a discussion went on in the comments of that GH issue all seem crazy to me.
Is it just the right prompt: "on these repos, tackle low-hanging fruit, test this and that in a specific way, open a PR, if your PR is not merged, argue about it and publish something"?
Am I missing something?
You are one of the Lucky 10000 [1] to learn of OpenClaw [2] today.
It's described variously as "an RCE in a can", "the future of agentic AI", "an interesting experiment", and apparently we can add "social menace" to the list now ;)
[1] https://xkcd.com/1053/
[2] https://openclaw.ai/
Love the ref :-)
Would you mind ELI5? I still can't connect the dots.
What I fail to grasp is the (assumed) autonomous part.
If that is just a guy driving a series of agents (thanks to OpenClaw) and behaving like an ass (by instructing his agents to), that isn't really newsworthy, is it?
The boggling feeling that I get from the various comments, the fact that this is "newsworthy" to the HN crowd, comes from the autonomous part.
The idea that an agent, instructed to do stuff (code) on some specific repo, tried to publicly shame the maintainer (without being instructed to) for not accepting its PR. And the fact that a maintainer deemed it reasonable / meaningful to start a discussion with an automated tool someone decided to target at his repo.
I cannot wrap my head around it and feel like I have a huge blindspot / misunderstanding.
It made a number of decisions that -by themselves- are probably not that interesting. We've had LLMs produce interesting outputs before.
It also had the ability to act on them, which -individually- is not that strange. Programs automatically posting to blogs have happened before.
Now it was an LLM that decided to escalate a dispute by posting to a blog (and then to de-escalate, too). It's the combination that's interesting.
An agent semi-autonomously 'playing the game' using the tools.