Comment by zarzavat

3 hours ago

It's about responsibility.

If I get pwned because my AI agent wrote code with a security vulnerability, none of my users are going to accept the excuse that I used AI and it's a brave new world. I will get the blame: not Anthropic or OpenAI or Google, but me.

The same goes if my AI-generated code leads to data loss or downtime, or if it uses too many resources, doesn't scale, or gives out error messages like candy.

The buck stops with me, and therefore I have to read the code, line by line, carefully.

It's not even a formality. I constantly find issues with AI-generated code. These things are lazy and often just stub out code instead of making a sober determination of whether the functionality can safely be stubbed out.

You could say "just AI harder and get the AI to do the review," and I do this a lot, but reviewing is not a neutral activity. A review can itself be harmful if it flags spurious issues whose fixes create new problems. So I still have to go through the AI-generated review issue by issue and weed out any harmful criticism.

On the other hand, I don’t need to carefully review every line of code in my thumbnail generator and its associated UI.

My nonexistent backend isn’t going to be pwned if there is a bug in the thumbnail generation.

After QA testing on my device, a quick scroll through the code is enough.

Maybe prompt “are errors during thumbnail generation caught to prevent app crashes?” if we’re feeling extra cautious today.
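
To make that concrete, here’s a minimal sketch of the kind of error handling that prompt should surface, assuming a Swift app; the generateThumbnail helper and ThumbnailError type are hypothetical names, not from any real project. The point is that decode failures throw and are caught per item, so one corrupt file can’t crash the app:

```swift
import UIKit

enum ThumbnailError: Error {
    case decodeFailed
}

// Hypothetical helper: decode image data and scale it down.
// Bad input throws instead of crashing.
func generateThumbnail(from data: Data, maxSide: CGFloat = 200) throws -> UIImage {
    guard let image = UIImage(data: data) else {
        throw ThumbnailError.decodeFailed
    }
    let scale = min(maxSide / image.size.width, maxSide / image.size.height, 1)
    let size = CGSize(width: image.size.width * scale,
                      height: image.size.height * scale)
    return UIGraphicsImageRenderer(size: size).image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}

// Catch per item: a corrupt file yields nil, and the rest of the batch still renders.
func thumbnails(for items: [Data]) -> [UIImage?] {
    items.map { data in
        do {
            return try generateThumbnail(from: data)
        } catch {
            print("thumbnail failed: \(error)")
            return nil
        }
    }
}
```

Returning nil for the failed item lets the UI show a placeholder instead of dying mid-scroll.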

And just like that, it saved a day of work.