Comment by chasing
15 hours ago
Yeah, it's not just my job to generate the code: It's my job to know the code. I can't let code out into the wild that I'm not 100% willing to vouch for.
At a higher level, it goes beyond that: it's my job to take responsibility for the code. At some fundamental level, that puts a limit on how productive AI can be, because we can only produce code as fast as the people taking responsibility can execute whatever processes they need to ensure sufficient due diligence. In a lot of jurisdictions, human-in-the-loop, line-by-line review is being mandated for code developed in regulated settings. That pretty much caps output at the rate of human review, which, to be honest, is not drastically higher than the rate of coding itself anyway (reviewing a change often takes me around 30% of the time the developer spent making it).
It means there is no value in producing more code, only in producing better, clearer, safer code that can be reasoned about by humans. That in turn makes me very sceptical about agents, other than as a useful parallelisation mechanism akin to multiple developers working on separate features. Ramping up the level of automation is frankly kind of boring to me, because if anything it makes the review part harder, which actually slows us down.