
Comment by ctoth

2 days ago

> doesn't change the fact that it's software that requires human interaction to work.

Have you ever seen Claude Code launch a subagent? You've used it, right? You've seen it launch a subagent to do work? You understand that that is, in fact, Claude Code running itself, right?

I don't think subagents are representative of anything particularly interesting on the "agents can run themselves" front.

They're tool calls. Claude Code provides a tool that lets the model say effectively:

  run_in_subagent("Figure out where JWTs are created and report back")

The current frontier models are all capable of "prompting themselves" in this way, but it's really just a parlor trick that helps avoid burning more tokens in the top-level context window.

It's a really useful parlor trick, but I don't think it tells us anything profound.
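
For concreteness, here's a minimal Python sketch of that mechanism. The model call and run_in_subagent are illustrative stand-ins, not Claude Code's actual internals:

  # Illustrative only: a fake "model" plus a run_in_subagent tool.
  # Claude Code wires the same shape through its Task tool and a real LLM.
  def call_model(prompt: str, context: list[str]) -> str:
      return f"report: looked into '{prompt}'"  # pretend LLM output

  def run_in_subagent(task: str) -> str:
      # The subagent starts from a fresh, empty context window; only its
      # final report flows back to the parent.
      fresh_context: list[str] = []
      return call_model(task, fresh_context)

  # Parent loop: the model "prompts itself" via a tool call, and the parent
  # context grows only by the short report.
  parent_context = ["user: where are JWTs created in this repo?"]
  report = run_in_subagent("Figure out where JWTs are created and report back")
  parent_context.append(f"tool result: {report}")
  print(parent_context)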

  • The mechanism being simple is the interesting part. If one large complex goal can be split into subgoals and the subgoals completed without you, then you need a lot fewer humans to do a lot more work.

    The OP says AI requires human interaction to work. This simply isn't true. You know yourself that as agents get more reliable you can delegate more to them, including having them launch more subagents, thereby getting more work done, with fewer and fewer humans. The unlock is the Task tool, but the power comes from the smarter and smarter models actually being able to delegate hierarchical tasks well!

    • You misunderstand.

      The only reason to launch subagents is to avoid poisoning the LLM's already small context window with unrelated tokens.

      It doesn't make the LLM smarter or more capable.

    • Wtf? A sub-agent is a tool you give an agent, telling it "If you need to analyze logs, delegate to the logs_viewer agent," so that the context window doesn't fill up with hundreds of thousands of tokens unnecessarily (sketched below). What universe do you live in where that mechanism somehow means you need fewer humans?

      Do you think this means "Build a car" can be accomplished just because an LLM can send a prompt to another LLM who reports back a response?
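
      To make the token-saving point concrete, here's a toy Python illustration; the numbers are hypothetical and a crude word count stands in for a real tokenizer:

        # Hypothetical numbers; word count stands in for a real tokenizer.
        def tokens(text: str) -> int:
            return len(text.split())

        raw_logs = "ERROR timeout db=primary " * 50_000  # a noisy log dump
        summary = "3 distinct errors, all db connection timeouts around 02:00"

        # Pasting the logs into the parent context costs ~200k tokens...
        print("inline cost:", tokens(raw_logs))
        # ...delegating to a logs_viewer-style subagent costs only the report.
        print("delegated cost:", tokens(summary))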

My Linux server runs a cron job that can spin off a thread and even use other ~apps~ tools. Did I invent AGI?

  • Does your Linux server decide which processes to launch, and when, with a theory of what will happen next, in order to complete a goal you specified in natural language? If so, then yes, I reckon you sure have!

  • Maybe. But probably not. It doesn't matter if it's AGI, though. If those other apps and tools do simple things that are predictable, then we can be pretty sure what will happen. If those tools can modify their own configuration and create new cron jobs, it becomes much harder to say anything about what will happen (see the toy sketch at the end of this sub-thread).

    • Most of us work on software that can modify its own configuration and create new jobs. I, too, have worked in Ansible and Terraform.

      The key break here is the lack of predictability, and I think it's important that we don't get too starry-eyed: we should accept that this unpredictability might be a weakness, not a strength.
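
      Here's the promised toy sketch of that self-modification point, in Python rather than real cron (nothing touches an actual crontab):

        # A fixed job list is analyzable; a job that appends new jobs makes
        # the schedule's future depend on its own past runs.
        jobs = [lambda: print("backup"), lambda: print("rotate logs")]

        def self_extending_job():
            print("work, then reschedule")
            jobs.append(self_extending_job)  # rewrites its own "crontab"

        jobs.append(self_extending_job)
        for tick in range(3):
            for job in list(jobs):  # snapshot of the current schedule
                job()
            print(f"after tick {tick}: {len(jobs)} jobs")  # keeps growing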

My Claude has never yet launched itself from my terminal, given itself a prompt, and gotten to work. It has only ever spawned a sub-agent after I gave it a prompt. It was inert until a human got involved.

If that is software running itself, then an if statement that spawns a process conditionally is running itself.
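
Spelled out, that reductio looks like this (a trivial, self-contained Python sketch):

  import subprocess
  import sys

  needs_work = True
  if needs_work:  # by that definition, this line is "software running itself"
      subprocess.run([sys.executable, "-c", "print('doing the work')"])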

Substance aside, I feel this comment is combative enough to be considered unhelpful. Patronizing others and talking down to them convinces no one; it only serves as a temporary source of emotional catharsis and a less temporary source of reputational damage.

You're using it, and if someone else were using it, the output would be different. The point is really that simple.

All AI requires steering as the results begin to decohere and self-enshittify over time.

AI in the hands of an expert operator is an exoskeleton. AI left alone is a stooge.

Nobody has built an all-AI operator capable of self-direction and choices superior to a human expert. When that happens, you'd better have your debts paid and bunker stocked.

We haven't seen any signs of this yet. I'm totally open to the idea of that happening in the short term (within 5 years), but I'm pessimistic it'll happen so quickly. It seems as though there are major missing pieces of the puzzle.

For now, AI is an exoskeleton. If you don't know how to pilot it, or if you turn the autopilot on and leave it alone, you're creating a mess.

This is still an AI maximalist perspective. One expert with AI tools can outperform multiple experts without AI assistance. It's just got a much longer time horizon on us being wholly replaced.

A one-liner shell script can run itself.

  • One-liner shell scripts can be analyzed. Some of them can be determined not to delete the production database. The others will not be executed.
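
    A deliberately naive sketch of that analyzability point in Python; the denylist is illustrative, and real vetting would be far more involved:

      # Naive denylist check: a short, fixed script at least makes
      # pre-execution vetting tractable.
      DENYLIST = ("DROP TABLE", "rm -rf", "DELETE FROM")

      def safe_to_run(script: str) -> bool:
          return not any(bad in script for bad in DENYLIST)

      print(safe_to_run("tar czf backup.tgz /var/data"))   # True: run it
      print(safe_to_run("psql -c 'DROP TABLE users;'"))    # False: refuse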