Comment by jumploops

11 hours ago

> For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls.

> K2.5 Agent Swarm improves performance on complex tasks through parallel, specialized execution [..] leads to an 80% reduction in end-to-end runtime

Not just RL on tool calling, but RL on agent orchestration, neat!

1,500 tool calls per task sounds like a nightmare for unit economics though. I've been optimizing my own agent workflows and even a few dozen steps makes it hard to keep margins positive, so I'm not sure how this is viable for anyone not burning VC cash.
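Rough back-of-envelope (every number below is a made-up assumption, and prompt caching / context compaction change the picture a lot): if the full transcript is re-sent on each step, input tokens grow roughly quadratically with the number of steps, which is why 1,500-call chains look so brutal on paper.

```python
# Purely hypothetical numbers to show the shape of the cost, not real pricing.
steps = 1500
base_context = 5000              # assumption: system prompt + task description
tokens_added_per_step = 1000     # assumption: each tool result + model turn

# If the whole transcript is re-sent every step, input tokens sum quadratically.
total_input_tokens = sum(base_context + i * tokens_added_per_step for i in range(steps))
print(f"{total_input_tokens / 1e9:.2f}B input tokens")   # ~1.13B with these assumptions

price_per_m_input = 0.50         # assumption: $/1M input tokens, no cache discount
print(f"~${total_input_tokens / 1e6 * price_per_m_input:,.0f} per task before caching")
```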

  • "tool call" is just a reference to any elementary interaction with the outside system. It's not calling third-party APIs or anything like that.

> Kimi K2.5 can self-direct an agent swarm

Is this within the model? Or within the IDE/service that runs the model?

Because tool calling is mostly just the agent outputting "call tool X", and the IDE executes it and returns the data back into the AI's context.
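Roughly, the harness side looks like this (an illustrative sketch, not any particular vendor's actual API; all names here are made up):

```python
import json
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    arguments: str                    # JSON-encoded args, as most chat APIs return them

@dataclass
class Reply:
    text: str = ""
    tool_calls: list[ToolCall] = field(default_factory=list)

def run_agent(chat, tools, messages, max_steps=50):
    for _ in range(max_steps):
        reply = chat(messages)                        # model turn: it only emits tokens
        if not reply.tool_calls:
            return reply.text                         # no tool requested -> final answer
        for call in reply.tool_calls:                 # harness executes each named tool...
            result = tools[call.name](**json.loads(call.arguments))
            messages.append({"role": "tool",          # ...and feeds the data back in
                             "name": call.name,
                             "content": json.dumps(result)})
    raise RuntimeError("step budget exhausted")
```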

  • An LLM only outputs tokens, so this could be seen as an extension of tool calling: the model has been trained to treat "calling" itself as a sub-agent as just another tool, roughly as in the sketch below.
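Continuing the (purely illustrative) sketch above, the orchestrator can expose "spawn a sub-agent" as one more tool whose implementation is a nested run of the same loop:

```python
def make_spawn_tool(chat, base_tools):
    """Wrap a nested run of the same agent loop so delegation is just one more tool."""
    def spawn_subagent(task: str) -> dict:
        sub_messages = [{"role": "user", "content": task}]
        return {"task": task, "result": run_agent(chat, base_tools, sub_messages)}
    return spawn_subagent

def orchestrator_tools(chat, base_tools):
    # From the orchestrating model's point of view, "delegate this to a sub-agent"
    # is just emitting a call to "spawn_subagent", like any other tool call.
    return {**base_tools, "spawn_subagent": make_spawn_tool(chat, base_tools)}
```

A harness could run many such spawn calls concurrently (threads or async), which is presumably where the claimed reduction in end-to-end runtime comes from.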