Comment by arjunbajaj

6 hours ago

I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI-generated code does not substitute for human thinking, testing, and cleanup/rewrite.

On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it's correct. Adding indirection where it makes no sense is a big mistake I've noticed LLMs make.
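To make that concrete, here's a hypothetical sketch of the pattern I mean (the names and scenario are mine, not actual Codex output): a strategy class, subclass, and factory wrapped around what is really a one-line calculation, followed by the compact rewrite I'd do by hand.

```python
# The kind of needless indirection LLMs often generate:
# an interface, a subclass, and a factory for a one-line formula.

class DiscountStrategy:
    def apply(self, price: float) -> float:
        raise NotImplementedError


class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)


def make_discount(percent: float) -> DiscountStrategy:
    return PercentageDiscount(percent)


# The compact rewrite: same behavior, nothing extra to maintain.
def discounted(price: float, percent: float) -> float:
    return price * (1 - percent / 100)


print(make_discount(10).apply(200.0))  # 180.0, via three layers
print(discounted(200.0, 10))           # 180.0, directly
```

The abstraction would earn its keep if there were several genuinely different strategies; when there's only one, it's just surface area to review and maintain.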

I think I'm going to use it as a guide for our own internal AI guidelines. We hire a lot of contractors, and the amount of just awful code we get is really taking a toll and slowing down site buildouts.

I agree this could be a template that services like GitHub should offer, the same way they suggest contributing-guideline and code-of-conduct templates.

I agree with you on the policy being balanced.

However:

> AI-generated code does not substitute for human thinking, testing, and cleanup/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

  • > Isn't that the end goal of these tools and companies producing them?

    It seems to be the goal, but they seem very far from achieving it.

    One thing you should probably account for is that most proponents of these technologies are trying to sell you something. That doesn't mean the tools have no value, but the wild claims about their capabilities are just that: claims.

  • Intern-generated code does not substitute for tech lead thinking, testing, and cleanup/rewrite.

    • No, the code is generated by a tool that's "smarter than people in many ways". So which parts of "thinking, testing, and clean up/rewrite" can we trust it with?

  • This is such a good write-up, and something I'm struggling with very hard. Does quality of code in the traditional sense even matter anymore if, e.g., CC can work with said code anyway? I haven't had imposter syndrome in a long time, but it's spiking hard now. Whenever I read or write code I feel like I'm an incompetent dev doing obsolete things.