Comment by tombert

9 hours ago

I use ChatGPT for small code snippets and to analyze error logs, but for more elaborate stuff I've had pretty bad luck.

Even prior to AI, I've said many times that code generation is evil [1]. I hated Ruby on Rails for this reason: people generate tons of files, and then other people are stuck maintaining lots of code that they fundamentally do not understand. To a lesser extent, you also had this with IntelliJ generating large quantities of code in an attempt to make Java a little less irritating.
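For anyone who hasn't used Rails: a single scaffold command emits a dozen-plus files across the app. This transcript is abbreviated, the model name is made up, and the exact output varies by Rails version:

    $ rails generate scaffold Post title:string body:text
          invoke  active_record
          create    db/migrate/20240101000000_create_posts.rb
          create    app/models/post.rb
          invoke  resource_route
           route    resources :posts
          invoke  scaffold_controller
          create    app/controllers/posts_controller.rb
          invoke    erb
          create      app/views/posts/index.html.erb
          create      app/views/posts/edit.html.erb
          create      app/views/posts/_form.html.erb
          invoke    jbuilder
          create      app/views/posts/index.json.jbuilder
          ... (plus tests, fixtures, and helpers)

Every one of those files is now somebody's to maintain, whether or not they understand what the generator decided for them.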

I once worked for a company that would generate Protobuf parsers and then monkey-patch extra functionality on top of them, and it was an extremely irritating and error-prone process.
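To make the pattern concrete, here's a minimal Python sketch of that kind of monkey-patching. The user_pb2 module, the User message, and the is_valid helper are all hypothetical stand-ins, and depending on the protobuf runtime, patching the generated class may not even be allowed:

    # Hypothetical generated module: protoc's --python_out would emit
    # user_pb2.py from a .proto file defining a User message with an
    # email field. Both names are made up for illustration.
    import user_pb2

    def _is_valid(self):
        # "Extra functionality" that the .proto file knows nothing about.
        return "@" in self.email

    # Bolt the method onto the generated class. Nobody reading the
    # generated code or the .proto can see this, and the patch silently
    # breaks if the field is renamed and the parser is regenerated.
    user_pb2.User.is_valid = _is_valid

    u = user_pb2.User(email="tombert@example.com")
    print(u.is_valid())  # True, but only in processes that imported the patch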

The damage used to be limited to a few specific code generation tools, but with LLMs there's effectively no limit to how much code can be generated. That isn't inherently an issue, but like other code generation tools, it runs the risk of creating a lot of shitty code that no one actually understands.

It's one thing if it's something low-stakes like a game or a TODO-list app, but it's much more concerning for banking and medical applications: if a lazy developer generates a large amount of code that looks more or less correct, and it seems to more or less work, but they don't actually understand it, shit can get serious. I would really prefer that the people writing the firmware for an EKG machine actually understand the code they're writing.

[1] At least code that you're expected to edit and maintain. My opinion changes for generated code that's just an optimization detail and is never touched by hand.