Comment by davps
3 years ago
Yes, I use it that way. And if ChatGPT doesn't generate the code with pure functions (it usually doesn't), you can explicitly ask it to rewrite the code with pure functions, then ask it to generate the tests.
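To make "pure" concrete, here's a minimal sketch of the kind of rewrite I mean (the discount example and its names are invented for illustration, not from my actual sessions):

    # Impure: depends on hidden global state, so a test can't
    # pin down its behavior from the arguments alone.
    DISCOUNT_RATE = 0.25

    def apply_discount_impure(price: float) -> float:
        return price * (1 - DISCOUNT_RATE)

    # Pure: everything it needs comes in as arguments, and the
    # same inputs always produce the same output, so the test
    # is a plain input/output check.
    def apply_discount(price: float, rate: float) -> float:
        return price * (1 - rate)

    def test_apply_discount():
        assert apply_discount(100.0, 0.25) == 75.0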
I usually get good tests from ChatGPT when I treat it as an iterative process, requesting multiple improvements to the generated tests based on what it gives me. Note that this doesn't replace the skills you need to write good test coverage.
For example, you can ask it to generate integration tests instead of unit tests when the code needs more context. Providing details on how the testing code should be generated really helps. You can also ask it to refactor the code to make it testable, for example by converting some functions into actual pure functions, or by extracting a piece of the generated code into a separate function. Then you ask it to generate tests for normal and boundary conditions, as in the sketch below. The more specific you get, the higher the chance of getting good, extensive tests from it.
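As a concrete illustration of that last step, here's a hedged sketch of what "normal plus boundary" tests can look like with pytest (the parse_age function and its 0-130 range are invented for the example):

    import pytest

    # Hypothetical pure function under test: parses and validates an age.
    def parse_age(value: str) -> int:
        age = int(value)
        if not 0 <= age <= 130:
            raise ValueError(f"age out of range: {age}")
        return age

    # Normal case.
    def test_parse_age_typical():
        assert parse_age("42") == 42

    # Boundary conditions: the edges of the valid range.
    @pytest.mark.parametrize("value, expected", [("0", 0), ("130", 130)])
    def test_parse_age_boundaries(value, expected):
        assert parse_age(value) == expected

    # Just outside the range, plus malformed input.
    @pytest.mark.parametrize("value", ["-1", "131", "abc"])
    def test_parse_age_rejects_invalid(value):
        with pytest.raises(ValueError):
            parse_age(value)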
Having ChatGPT generate both the tests and the code really helps to catch the subtle bugs it usually introduces into the generated code (which I then fix manually). This usually gives me test coverage that demonstrates the robustness I need for production code.
This approach still requires manual fine-tuning of the generated code; I think ChatGPT still struggles to get the context right. But in general, when it makes sense to use it, I'm more productive writing tests this way than manually.