Comment by klabb3

6 days ago

Yeah. Reviewing code thoroughly is extremely time consuming. When you review code from a human, you can scan the choices they made fairly quickly - say, they used framework X and language feature Y. Most importantly, you assume they've verified that certain things actually work. This way code review can be fast, but it still isn't thorough on its own. Most of it is trust and bureaucracy (big companies also do this to prevent malicious employees from smuggling in backdoors, etc).

Code from LLMs that looks right, clean, and even clever poses as competence but is prone to hallucinations and business logic errors. In the short term, these changes will pass review because of their appearance while containing more issues than a human would have introduced in the same code. In the medium term, we just lose that signal - the assumptions we can make about the author's state of mind and comprehension. It's already incredibly hard to distinguish solid points from nonsense when the nonsense is laundered by an LLM.