Comment by furyofantares

8 hours ago

> Kernighan's Law, which says debugging is twice as hard as writing the code in the first place. Now people increasingly believe that AI can debug far faster than a human (most likely because other smart people have already done similar debugging). And in the worst case, just ask the AI to rewrite the code.

I thought you were gonna go the opposite direction with this. Debugging is now 100x as hard as writing the code in the first place.

> Lehman’s Law, which states that a system's complexity increases as it evolves, unless work is done to maintain or reduce it. Similar to above, people are starting to believe otherwise.

Gotta disagree with this too. I find a lot of work has to be done to be able to continue vibing, because complexity increases beyond LLM capabilities rapidly otherwise.

> I thought you were gonna go the opposite direction with this. Debugging is now 100x as hard as writing the code in the first place.

It's 100x harder if a human has to debug AI-generated code. I was merely citing other people's beliefs: that AI can largely, if not completely, take care of debugging, and "better yet," rewrite the code altogether. I don't see how that's a better approach, but that might just be me.

  • I still run into plenty of situations where the LLM agent wrote the code very cheaply but is totally unable to debug it, and you can sink tons of time trying to get it to do so before giving up with nothing to show for it and figuring it out yourself.

    • What kind of code do you work on, and what model & harness do you use? Genuinely curious so I can calibrate my understanding.

      I work on enterprise web apps for a few dozen people with Codex CLI and GPT-5.4, and haven't really run into those issues.
