Comment by aspenmartin

11 hours ago

because it has business context and better reasoning, and can ask humans for clarification and take direction.

You don't even need to benchmark this, although benchmarking is important. We have clear scaling laws on true statistical performance that is monotonically related to any notion of what performance means.

I do benchmarks for a living and can attest: benchmarks are bad, but it doesn't matter for the point I'm trying to make.

I feel like you're missing the initial context of this conversation (no pun intended):

> Like for example a trusted user makes feedback -> feedback gets curated into a ticket by an AI agent, then turned into a PR by an Agent, then reviewed by an Agent, before being deployed by an Agent.

Once you add "humans for clarification and take direction", then yeah, things can be useful, but that's far away from the no-human-involvement loop described earlier in this thread, which is what people are pushing back against.

Of course involving people makes things better; that's the entire point here: by removing the human, you won't get results as good. Going back to benchmarks: obviously involving humans isn't possible there, so again we're back to being unable to score these processes at all.

  • I'm confused about the scenario here. There is a human in the loop: it's the feedback part. There is business context: it is either seeded or maintained by the human and expanded by the agent. The agent can make inferences about the world, especially once embodiment + better multimodal interaction is rolled out [embodiment taking longer].

    Benchmarks ==> it's absolutely not a given that humans can't be involved in the loop of performance measurement. Why would that be the case?

> because it has business context

It doesn't, because it doesn't learn. Every time you run it, it's a new dawn with no knowledge of your business or your business context.

> better reasoning

It doesn't have better reasoning beyond very localized decisions.

> and can ask humans for clarification and take direction.

And yet it doesn't, no matter how many .md files you throw at it, at crucial places in the code.

> We have clear scaling laws on true statistical performance that is monotonically related to any notion of what performance means.

This is just a bunch of words strung together, isn't it?

  • > It doesn't because it doesn't learn. Every time you run it, it's a new dawn with no knowledge of your business or your business context

    It does learn in context. And the lack of continuous learning is temporary, a quirk of the current stack; expect this to change rather quickly. It's also still not relevant: consider that agentic systems can be hierarchical, and that they have no trouble grokking codebases or doing internal searches effectively, and this will only improve.

    > It doesn't have better reasoning beyond very localized decisions.

    Do you have any basis for this claim? It contradicts a large body of direct evidence, measurement, and theory.

    > This is just a bunch of words strung together, isn't it?

    Maybe to yourself? Chinchilla scaling laws and RL scaling laws are measured very accurately based on next-token test loss (Chinchilla), and this scales very predictably. It is related to downstream performance; that relationship is noisy but clearly monotonic.
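    To make the claim concrete, here is a minimal sketch of the Chinchilla parametric loss form, L(N, D) = E + A/N^α + B/D^β, using the approximate published fits from the Chinchilla work; the constants and the (N, D) pairs below are illustrative, not authoritative.

    ```python
    # Sketch of the Chinchilla parametric scaling law:
    #   L(N, D) = E + A / N^alpha + B / D^beta
    # where N = parameter count and D = training tokens.
    # Constants are the approximate published fits; treat them as illustrative.

    def chinchilla_loss(n_params: float, n_tokens: float) -> float:
        E, A, B = 1.69, 406.4, 410.7
        alpha, beta = 0.34, 0.28
        return E + A / n_params**alpha + B / n_tokens**beta

    # Predicted loss falls smoothly as params and data are scaled up together:
    for n, d in [(1e9, 20e9), (10e9, 200e9), (70e9, 1.4e12)]:
        print(f"N={n:.0e}, D={d:.0e} -> predicted loss {chinchilla_loss(n, d):.3f}")
    ```

    This is the "scales very predictably" part: test loss follows a clean curve in N and D. How that loss maps to downstream task scores is the noisy-but-monotonic part of the claim.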

    • > It does learn in context

      It quite literally doesn't.

      It also doesn't help that every new context is a new dawn with no knowledge of things past.

      > Also still not relevant, consider that agentic systems can be hierarchical and that they have no trouble being able

      A bunch of Memento guys directing a bunch of other Memento guys don't make a robust system, or a system that learns, or a system that maintains and retains things like business context.

      > and this will only improve.

      We've heard this mantra for quite some time now.

      > Do you have any basis for this claim?

      Oh. Just the fact that in every single coding session, even on a small 20kloc codebase, I need to spend time cleaning up large amounts of duplicated code, undoing quite a few wrong assumptions, and correcting the agent when it goes off on wild tangents and wild goose chases.

      > Maybe to yourself? Chinchilla scaling laws a

      yap yap yap. The result is anything but your rosy description of these amazing reasoning learning systems that handle business context.


  • Almost every task that people are tackling agents on, it's either not worth doing, can be done better with scripts and software, or require human oversight (that negates all the advantages).

    • I assume this is a troll, because it's so far removed from reality that there's not much to say. "Almost every task" -- I'm sure you have great data to back this up. "It's not worth doing" -- well, sure, if you want to put your head in the sand and ignore even what today's systems can do, let alone the improvement trajectory. "can be done better with scripts and software" -- not sure if you realize this, but agents write scripts and software. "or require human oversight (that negates all the advantages." -- it certainly does not; human oversight vs. actual humans implementing the code is dramatically more efficient and productive.