Comment by pavel_lishin · 5 days ago

> the second prompt he sent revealed that the code had security issues, which the LLM presumably also fixed after finding them.

Maybe. Or maybe a third prompt would have found more. And more on the fourth. And none on the fifth, even though some still existed.

You are the last barrier between the generated code and production. It would be silly to trust the LLM's output blindly rather than thinking deeply about how it could be wrong.

  • Which means they are nowhere near a 10x solution, which is exactly the author's point.