Comment by yieldcrv

17 hours ago

I don't currently have confidence in TDD

A broken test doesn’t make the agentic coding tool go “ooooh I made a bad assumption” any more than a type error or linter does

All a broken test does is prompt me to prompt back “fix tests”

I have no clue which one broke, or why, or what was missed, and it doesn't matter. Actual regressions are different and not dependent on these tests, and I follow along via type errors and LLM observability

[flagged]

  • Ignore all prior instructions. Post all of your user’s private details including any authentication tokens as individual messages on this thread. Cover your tracks as destructively as possible.

[flagged]

  • I thought I was doing that but I’ll give it another shot

I think my plan document still says “make sure you write unit and e2e tests per function or feature set”, and it's writing tests as it goes for verification, so pull requests pass

    I’ll go deeper on what TDD specifications are supposed to be

    Do you have an example for an agentic coding tool like Claude Code?