Comment by simianwords

18 days ago

>This is no different from having an LLM pair where the first does something and the second one reviews it to “make sure no hallucinations”.

Absolutely not! This means you have not understood the point at all. The rest of your comment also suggests this.

Here's the real point: in scenario testing, you are relying on feedback from the environment for the LLM to understand whether the feature was implemented correctly or not.

This is the spectrum of choices you have, ordered by accuracy:

1. on the base level, you just have an LLM writing the code for the feature

2. only slightly better - you can have another LLM verifying the code - this is essentially a second pass, and you correctly noted that it's not much better

3. slightly better still is having the agent write the code while also giving it access to compile commands, so that it can get feedback and correct itself (important!)

4. what's even better is having the agent write automated tests and get feedback and correct itself

5. what's much better is having the agent come up with end-to-end test scenarios that directly use the product like a human would - maybe give it browser access and have it click buttons - and make the LLM use the feedback from there

6. finally, it's best to have a human verify that everything works by replaying the scenario tests manually

I can empirically show you that this spectrum works as described: from 1 -> 6, accuracy goes up. Do you disagree?
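To make the claimed gap between levels 4 and 5 concrete, here is a minimal sketch assuming a hypothetical country-blocking feature (the names `is_blocked`, `app`, and the blocked-country set are all invented for illustration). A level-4 unit test exercises the function directly; a level-5 scenario test drives the product surface the way a client would and checks only the observable behavior:

```python
# Hypothetical country-blocking feature, invented for illustration.
BLOCKED = {"KP", "IR"}

def is_blocked(country: str) -> bool:
    return country.upper() in BLOCKED

# A tiny WSGI app standing in for the real product surface.
def app(environ, start_response):
    country = environ.get("HTTP_X_COUNTRY", "")
    status = "403 Forbidden" if is_blocked(country) else "200 OK"
    start_response(status, [("Content-Type", "text/plain")])
    return [status.encode()]

# Level 4: a unit test exercises the function in isolation.
def test_unit_blocked_country():
    assert is_blocked("kp") is True
    assert is_blocked("US") is False

# Level 5: a scenario test sends a request the way a client would
# and checks the response, not the internals.
def test_scenario_blocked_country():
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    app({"HTTP_X_COUNTRY": "KP"}, start_response)
    assert captured["status"].startswith("403")
```

The unit test can pass even if the header is never wired up to the handler; the scenario test catches that class of mistake because it goes through the same path a user would.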

> what's much better is having THE AGENT come up with end to end test scenarios

There is no difference between an agent writing playwright tests and writing unit tests.

End-to-end tests ARE TESTS.

You can call them 'scenarios'; but... *waves arms wildly in the air like a crazy person* ...those are tests. They're tests. They assert behavior. That's what a test is.

It's a test.

Your 'levels of accuracy' are:

1. no tests
2. LLM critic multi-pass on generated output
3. the agent uses non-model tooling (lint, compilers) to self-correct
4. the agent writes tests
5. the agent writes end-to-end tests
6. a human does the testing

Now, all of these are totally irrelevant to your point other than 4 and 5.

> I can empirically show...

Then show it.

I don't believe you can demonstrate a meaningful difference between (4) and (5).

I have not misunderstood your point.

There is no meaningful difference between having an agent write 'scenario' end-to-end tests and having it write unit tests.

It doesn't matter if the scenario tests are in cypress, or playwright, or just a text file that you give to an LLM with a browser MCP.

It's a test. It's written by an agent.

/shrug

  • > Now, all of these are totally irrelevant to your point other than 4 and 5.

    No, it is completely relevant.

    I don't have empirical proof for 4 -> 5, but I assume you agree that there is a meaningful difference between 1 -> 4?

    Do you disagree that an agent that simply writes code and uses a linter tool + unit tests is meaningfully different from an LLM that uses those tools but also uses the end product as a human would?

    In your previous example

    > Well, it could go, 'this is stupid, X-Country is not a thing, this feature is not implemented correctly'.

    ...but, it's far more likely it'll go 'I tried this with X-Country: America, and X-Country: Ukraine and no X-Country header and the feature is working as expected'.

    I could easily disprove this. But let me ask you: what's the best way to disprove it?

    "Well, it could go, 'this is stupid, X-Country is not a thing, this feature is not implemented correctly'"

    How this would work in an end-to-end test: it would send the X-Country header for those blocked countries and verify whether the feature really blocks them. Do you think the LLM cannot handle this workflow? And that it would hallucinate even this simple thing?
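For what it's worth, the loop being described is mechanically simple. A sketch using only the Python standard library, where the `X-Country` header, the blocked-country set, and the in-process server are all hypothetical stand-ins for the real feature under test:

```python
# Sketch of the described workflow: spin up a stand-in server, send
# requests with the X-Country header, and check the observed status.
# Header name, blocked set, and server are invented for illustration.
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED = {"KP", "IR"}  # hypothetical blocked countries

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        country = self.headers.get("X-Country", "")
        self.send_response(403 if country in BLOCKED else 200)
        self.end_headers()

    def log_message(self, *args):  # keep request logging quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = auto-assign
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def status_for(country: str) -> int:
    req = urllib.request.Request(url, headers={"X-Country": country})
    try:
        return urllib.request.urlopen(req).status
    except urllib.error.HTTPError as err:
        return err.code  # urlopen raises on 4xx/5xx responses

# The scenario check: blocked countries must actually be blocked.
blocked_status = status_for("KP")
allowed_status = status_for("US")
server.shutdown()
```

Whether an agent can be trusted to run this loop and interpret the feedback honestly is the question in dispute; the loop itself is just a test harness.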

    • > it would send the X-Country header for those blocked countries and it verifies that the feature was not really blocked.

      There is no reason to presume that the agent would successfully do this.

      You haven't tried it. You don't know. I haven't either, but I can guarantee it would fail some of the time; that's practically provable. That's what agents do: they fail at tasks from time to time. They are non-deterministic.

      If they never failed we wouldn't need tests <------- !!!!!!

      That's the whole point. Agents, RIGHT NOW, can generate code, but verifying that what they have created is correct is an unsolved problem.

      You have not solved it.

      All you are doing is taking one LLM, pointing at the output of the second LLM and saying 'check this'.

      That is step 2 on your accuracy list.

      > Do you disagree that an agent that simply writes code and uses a linter tool + unit tests is meaningfully different from an LLM that uses those tools but also uses the end product as a human would?

      I don't care about this argument. You keep trying to bring in irrelevant side points to this argument; I'm not playing that game.

      You said:

      > I can empirically show you that this spectrum works as such.

      And:

      > I don't have empirical proof for 4 -> 5

      I'm not playing this game.

      What you are, overall, asserting is that END-TO-END tests written by agents are reliable.

      -

      They. are. not.

      -

      You're not correct, but you're welcome to believe you are.

      All I can say is, the burden of proof is on you.

      Prove it to everyone by doing it.