Comment by habinero
8 days ago
No, it's broadly true. That's also why we have code review and tests: code has to pass a couple of filters before it ships.
LLMs don't make mistakes like humans make mistakes.
If you're a SWE at my company, I can assume you have a baseline of skill and that you tested the code yourself, so I'm looking for edge cases or gaps you might have missed. Do you have good enough tests to make both of us confident the code does what it appears to do?
With LLMs, I have to treat the code as if it were written by a hostile adversary trying to sneak in subtle backdoors. I can't trust any of it to have been done honestly.