
Comment by JimDabell

9 days ago

> When you give up the work of deciding what the expected inputs and outputs of the code/program is you are no longer in the drivers seat.

You don’t need to write tests for that, you need to write acceptance criteria.

> You don’t need to write tests for that, you need to write acceptance criteria.

Sir, those are called tests.

  • I see you have little experience with Scrum...

    Acceptance criteria are human-readable text that the person specifying the software has to write to fill up a field in Scrum tools; they do not at all guide the work of the developers.

    They're usually derived from the description by an algorithm (one the person writing them has to run in their head), and any deviation from that algorithm should prompt the person to edit the description instead, so that the deviation goes away.

    • > Acceptance criteria are human-readable text that the person specifying the software has to write (...)

      You're not familiar with automated testing or BDD, are you?

      > (...) to fill up a field in Scrum tools (...)

      It seems you are confusing test management software used to track manual tests with actual acceptance tests.

      This sort of confusion would have been OK 20 years ago, but it has since gone the way of the dodo.

      1 reply →

As in, a developer would write something in, e.g., Gherkin, and AI would automatically create the matching unit tests and the production code?

That would be interesting. Of course, Gherkin tends to just be transpiled into generated code that is customized for the particular test, so I'm not sure how much AI can really abstract it away.
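
For reference, the moving parts look roughly like this: a human-readable scenario, with each step bound to ordinary code written for that particular scenario. A minimal Cucumber sketch, where makeExpiredToken and submitReset are hypothetical helpers:

    # feature file (excerpt) -- the human-readable specification
    Scenario: Expired token is rejected
      Given a reset token that expired 1 hour ago
      When the user submits the token
      Then the request is rejected with "token expired"

    // steps.js -- each step is plain JavaScript, customized for this scenario
    const assert = require('node:assert');
    const { Given, When, Then } = require('@cucumber/cucumber');

    Given('a reset token that expired {int} hour ago', function (hours) {
      this.token = makeExpiredToken(hours); // hypothetical helper
    });

    When('the user submits the token', function () {
      this.result = submitReset(this.token); // hypothetical helper
    });

    Then('the request is rejected with {string}', function (message) {
      assert.strictEqual(this.result.error, message);
    });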

  • All of this reduces to a simple fact at the end of the discussion.

    You need some way of precisely telling AI what to do. As it turns out, there is only so much you can do with text. Come to think of it, you can write a whole book about a scene, and 100 people will still imagine it quite differently. And the actual photograph would be totally different from what all 100 of them imagined.

    As it turns out, if you wish to describe something accurately enough, you have to write mathematical statements, in other words statements that reduce to true/false answers. We could skip to the end of the discussion here and say you are better off either writing code directly or writing test cases.

    This is just people revisiting logic programming all over again.
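
    To make that concrete: an acceptance criterion that is precise enough to act on is one that can be evaluated to true or false, which is exactly what a test encodes. A minimal JavaScript sketch, where buildReport and orders are hypothetical:

        // "The report only contains orders from the requested month",
        // written as a statement that reduces to true or false.
        const assert = require('node:assert');

        const report = buildReport(orders, { month: '2024-06' }); // hypothetical API
        assert.ok(report.rows.every(row => row.date.startsWith('2024-06')));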

    • > You need some way of precisely telling AI what to do.

      I think this is the detail you are not getting quite right. The truth of the matter is that you don't need precision to get acceptable results, at least not in 100% of the cases. As with everything in software engineering, there is indeed "good enough".

      Also worth noting, LLMs allow anyone to improve upon "good enough".

      > As it turns out, if you wish to describe something accurately enough, you have to write mathematical statements, in other words statements that reduce to true/false answers.

      Not really. Nothing prevents you from referring to high-level sets of requirements. For example, if you tell an LLM "enforce Google's style guide", you don't have to concern yourself with how many spaces are in a tab. LLMs have been migrating towards instruction files and prompt files for a while, too.
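
      As a sketch of what such an instruction file can look like (the filename and conventions vary by tool; GitHub Copilot, for instance, reads .github/copilot-instructions.md):

          # .github/copilot-instructions.md (illustrative contents)
          - Follow Google's JavaScript style guide.
          - Every new endpoint needs a Cucumber scenario before implementation.
          - Reuse the existing error-handling helpers instead of ad-hoc try/catch.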

      1 reply →

  • I’m talking higher level than that. Think about the acceptance criteria you would put in a user story. I’m specifically responding to this:

    > When you give up the work of deciding what the expected inputs and outputs of the code/program is you are no longer in the drivers seat.

    You don’t need to personally write code that mechanically iterates over every possible state to remain in the driver’s seat. You need to describe the acceptance criteria.

    • > When you give up the work of deciding what the expected inputs and outputs of the code/program is you are no longer in the drivers seat.

      You're describing the happy path of BDD-style testing frameworks.

      6 replies →

    • I think your perspective is heavily influenced by the imperative paradigm, where you actually write the state transitions. Compare that to functional programming, where you only describe the relation between the initial and final state, or logic programming, where you describe the properties of the final state and let the system find the elements with those properties in the initial state.

      Those do not involve writing state transitions. You are merely describing the acceptance criteria. Imperative is the norm because that's how computers work, but there are other abstractions that map more closely to how people think, or to how the problem is already solved.
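
      A small JavaScript illustration of the difference, with people standing in for arbitrary input data:

          // Imperative: spell out the state transitions step by step.
          const adults = [];
          for (const person of people) {
            if (person.age >= 18) {
              adults.push(person);
            }
          }

          // Declarative: state the property the result must satisfy,
          // which reads much more like an acceptance criterion.
          const adultsDeclarative = people.filter(person => person.age >= 18);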

      3 replies →

  • > That would be interesting. Of course, Gherkin tends to just be transpiled into generated code that is customized for the particular test, so I'm not sure how much AI can really abstract it away.

    I don't think that's how Gherkin is used. Take Cucumber, for example: it only uses its feature files to specify which steps a test should execute, whereas the steps themselves are pretty vanilla JavaScript code.

    In theory, nowadays all you need is a skeleton of your test project, including feature files specifying the scenarios you want to run; you then prompt LLMs to fill in the steps required by your test scenarios.

    You can also use an LLM to generate the feature files themselves, but if the goal is to specify requirements and have a test suite enforce them, the scenarios are implicitly the starting point.
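
    Such a skeleton might be no more than a feature file written by a person plus pending step stubs for the LLM (or a developer) to fill in. The scenario wording below is illustrative:

        # features/checkout.feature -- written by the person specifying requirements
        Feature: Discount codes
          Scenario: A discount code is only applied once
            Given a cart containing 2 items
            When the discount code "SAVE10" is applied twice
            Then the total is reduced by 10 percent exactly once

        // features/step_definitions/checkout.steps.js -- stubs to be filled in
        const { Given, When, Then } = require('@cucumber/cucumber');

        Given('a cart containing {int} items', function (count) {
          return 'pending'; // implementation to be generated
        });

        When('the discount code {string} is applied twice', function (code) {
          return 'pending';
        });

        Then('the total is reduced by {int} percent exactly once', function (percent) {
          return 'pending';
        });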