
Comment by zkmon

6 days ago

If your system receives 1000 requests per second, does it keep writing code while processing every request, on a per-request basis? I hope you understand what runtime means.

Define runtime then.

> If your system receives 1000 requests per second, does it keep writing code while processing every request, on a per-request basis? I hope you understand what runtime means.

With enough scale it could, but it really depends on the use case, right? Take Claude Code, for instance: it probably receives well over 1000 requests per second, and in many of those cases it is writing code, emitting tool calls, and so on.

Or take Perplexity, for example. If you ask it to do a calculation on large numbers, it will use Python to do that.

If I ask Perplexity to simulate an investment over 100 years at a 4% return, putting aside $50 each month, it will write Python code to calculate that, and when I then ask it for a chart, it will also use Python to create the image.
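The kind of script such a tool call might generate can be sketched like this; the function name and the deposit-then-grow ordering are my assumptions, and the numbers are just the parameters from the request above, not measured data:

```python
# Sketch of a generated script for "put aside $50/month at 4% annual
# return for 100 years". Deposits compound monthly: each month the
# deposit is added, then one month of growth is applied.

def simulate_investment(monthly_deposit=50.0, annual_rate=0.04, years=100):
    monthly_rate = annual_rate / 12
    balance = 0.0
    for _ in range(years * 12):
        balance = (balance + monthly_deposit) * (1 + monthly_rate)
    return balance

final = simulate_investment()
print(f"After 100 years: ${final:,.2f}")
```

The loop agrees with the closed-form annuity-due formula FV = P(1+r)((1+r)^n - 1)/r, which is one way to sanity-check generated code of this kind.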

  • > Define runtime then.

    From GP: "But you don't use AI to define rules on the fly."

    Neither Claude nor Perplexity changes the rules it works by on a request-to-request basis. The code Claude outputs isn't the code Claude runs on, and Perplexity did not decide on its own to start writing Python scripts because its other ways of calculating large sums worked poorly. Those tools operate within a given rule set; they do not independently change those rules when a request warrants it.

    • You are not really defining runtime, though?

      Is whatever happens between, e.g., the HTTP input and the HTTP output not runtime then?

      1. HTTP Input

      2. While (true) runAgent() <- is that not runtime?

      3. HTTP Output
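      Steps 1–3 above could be sketched as follows; every name here (Agent, step, done, handle_request) is hypothetical, and the fixed step count merely stands in for a real "task finished" check:

```python
# Minimal sketch of the request -> agent-loop -> response shape
# from steps 1-3. No real HTTP or model calls; this only shows
# where "while (true) runAgent()" sits relative to input/output.

class Agent:
    def __init__(self, prompt):
        self.prompt = prompt
        self.steps = 0
        self.result = None

    def step(self):
        # One model call or tool call would happen here.
        self.steps += 1
        if self.steps >= 3:  # stand-in stopping condition
            self.result = f"answer to: {self.prompt}"

    def done(self):
        return self.result is not None

def handle_request(prompt):
    agent = Agent(prompt)        # 1. HTTP input
    while not agent.done():      # 2. run the agent in a loop
        agent.step()
    return agent.result          # 3. HTTP output

print(handle_request("simulate my investment"))
```

      On this reading, the loop body is exactly what executes at runtime, which is the question the bullet is pressing.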

      Additionally, Claude could trigger itself with custom prompts, etc., to run instances of itself concurrently, in parallel.

      Or are you saying that the only rule is that the agent is being run in a loop?

      But isn't the whole discussion about how an AI agent is different from a workflow?

      Is the point that a workflow is just an LLM being triggered in a loop?