Comment by agentultra
7 hours ago
I often come back to a quote from Tony Hoare (paraphrased):
There are two ways to write software: either it obviously has no errors or there are no obvious errors in it.
LLMs tend to generate the latter. Because that’s what’s in the training data: all the code that was rushed to production with a promise it would be fixed later. And humans are notoriously bad at catching these kinds of errors.
It feels bad using tools like this because it turns you into a reverse centaur. You’re there because the tool cannot be held accountable. You’re the last mile delivery driver shipping the code. You didn’t participate in its construction and you take all the responsibility for it.
Only, there are studies demonstrating how small an impact code review has on code quality, and that after a reviewer has read a few hundred lines of code in an hour, the effect disappears. Current AI processes aren't equipped to handle this.
Whatever we're doing, it's heading in the opposite direction of engineering.