The problem is that a lot of code works in general but fails in edge cases. I would hate to be the guy whose job is only to figure out why verbose AI-generated code fails under one particular condition.
I read LLM-generated code the way I review a PR. I skim for anything that stands out as a common pitfall, and I dig into the details of the areas where I expect issues.
For most things I'm not willing to accept faster code at the expense of being an expert in the code.
So I am still trying to find the right amount of reading, editing, and reworking that gets the job done faster, where "the job" includes me being an expert in the produced code, not just the production of code.
There are periods of skimming, but I'm doing a lot more than skimming.