Comment by iforgotpassword

5 days ago

And luckily the "plain English" the AI outputs is always 100% correct, so we don't have to worry about buggy Python code down the line because those Python devs got incorrect instructions. I mean, how would they even verify anything? They're Python devs, and Perl will look like complete gibberish to them.

So they might have saved all that time, but what's the impact of an incorrect reimplementation going to be? What does that software do?

Ultimately it seems the question ought to be “is the code they wrote with AI buggier than the code they would have written without”, not “is the code they wrote with AI 100% bug-free”. I doubt that any team doing a significant refactor from a language they don’t know could produce bug-free code on any reasonable timeline, AI or not.

If the question is the former, though, then unless the result is horrendously buggy, I wonder whether the speed increase offsets the “buggier code” (if the code even is buggier), because finishing early means they can bug-bash for longer.

  • I guess it depends on whether the devs are able and willing to even still look at the old code when they have a nice, easy-to-understand description in front of them of what they're supposed to implement. And sure, at the end of the day management just cares about what costs less, including any accidents caused by the AI giving the wrong description. It might also depend on who'll use that tech. If it's a bank, this could cost millions, if not billions. If it's a medical device (yeah, I really don't think it is. I mean, I really hope it isn't), it could cost lives. But at least then we can blame the AI, so nobody is at fault.

Existing QA and deployment best practices should mostly answer the question of "does the new one work like the old one". The difference here is that now the devs can understand what the new one ought to do much faster.
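As a minimal sketch of what that QA practice can look like (assuming both versions are command-line programs and that a corpus of recorded sample inputs exists; the script paths and directory names below are hypothetical placeholders, not anything from the actual project), a golden-master style test can replay the same inputs through the old Perl version and the new Python port and flag any divergence:

```python
# Golden-master / differential test: run both implementations on the same
# inputs and assert their outputs match. Paths below are hypothetical.
import subprocess
from pathlib import Path

LEGACY_CMD = ["perl", "legacy/report.pl"]    # old implementation (placeholder)
PORTED_CMD = ["python", "ported/report.py"]  # new implementation (placeholder)


def run(cmd: list[str], input_path: Path) -> str:
    """Run one implementation on a sample input and capture its stdout."""
    result = subprocess.run(
        cmd + [str(input_path)],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


def test_port_matches_legacy():
    # Replay every recorded input through both versions and compare outputs.
    for input_path in sorted(Path("golden_inputs").glob("*.txt")):
        assert run(PORTED_CMD, input_path) == run(LEGACY_CMD, input_path), (
            f"Output diverged for {input_path}"
        )
```

The nice part is that a test like this doesn't require anyone to read the Perl: it only treats the old program as an oracle for what the new one should output.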