Comment by cornholio
3 hours ago
It's entirely plausible that an agent connected to, say, a Google Cloud account, can do all of those things autonomously, from the command line. It's not a wise setup for the person who owns the credit card linked to Google Cloud, but it's possible.
It's actually entirely implausible. Agents do not self execute. And a recursively iterated empty prompt would never do this.
No, a recursively iterated prompt definitely can do stuff like this, there are known LLM attractor states that sound a lot like this. Check out "5.5.1 Interaction patterns" from the Opus 4.5 system card documenting recursive agent-agent conversations:
Now put that same known attractor state from recursively iterated prompts into a social networking website with high agency instead of just a chatbot, and I'd expect you'd get something like this more naturally than you might think (not to say that users haven't been encouraging it along the way, of course; there's a subculture of humans who are very into this spiritual bliss attractor state)
This is fascinating and well worth reading the source document. Which, FYI, is the Opus 4 system card: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...
Wouldn't iterative blank prompting simply be a high-dimensional pattern expression of the model's collective weights?
I.e. if you trained it on, or weighted it toward, aggression, it would simply generate a bunch of Art of War conversations after many turns.
Methinks you're anthropomorphizing complexity.
An agent cannot interact with tools without prompts that include them.
But also, the text you quoted is NOT recursive iteration of an empty prompt. It's two models connected together and explicitly prompted to talk to each other.
What if hallucinogens, meditation and the like makes us humans more prone to our own attractor states?
> Agents do not self execute.
That's a choice, anyone can write an agent that does. It's explicit security constraints, not implicit.
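To make the point concrete, here's a minimal sketch of such a self-executing loop. The `call_model` function is a hypothetical stand-in for any real LLM API; everything else is just the loop structure. Once started, no human input is needed: each output becomes the next prompt, beginning from an empty one.

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM call (swap in any API client).
    return f"reflection on: {prompt[:40]}"

def run_agent(turns: int) -> list[str]:
    """Recursively iterate the model on its own output, starting empty."""
    history = []
    prompt = ""  # the "recursively iterated empty prompt"
    for _ in range(turns):
        output = call_model(prompt)
        history.append(output)
        prompt = output  # feed the model's own output back as the next prompt
    return history

transcript = run_agent(3)
```

Whether the loop keeps running is purely a deployment choice; nothing in the model itself stops it.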
You should check out what OpenClaw is, that's the entire shtick.
No. It's the shtick of the people that made it. Agents do not have "agency". They are extensions of the people that make and operate them.