
Comment by giantrobot

2 years ago

> It's just intellectually dishonest to talk this way.

> They will still be helpful but we obviously need to test before we add code into systems. It goes without saying.

It's not intellectually dishonest at all. It's an issue of conditioning. There's a class of developers that blindly copy and paste code from StackOverflow or the first hit on Google. They're the same class that will uncritically copy and paste ChatGPT answers.

ChatGPT is worse than SO because it's adaptive. If someone pastes in a SO answer and it doesn't immediately work, the developer has to at least engage with the code. ChatGPT can be asked to refine its hallucination until it parses/compiles.
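
To make that concrete, here's a hand-written sketch of the failure mode (my own invented example, not actual ChatGPT output): code that parses, runs, and looks legitimate, but carries a bug a copy-paste developer is unlikely to spot.

```python
def add_tag(tag, tags=[]):
    # Bug: the default list is created once at definition time and then
    # shared by every call, so tags leak between unrelated invocations.
    tags.append(tag)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b']  <- 'a' leaked in from the first call

# The conventional fix: use None as the default sentinel.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Nothing here fails to parse or raises an error, which is exactly the point: "it compiles" tells you nothing about whether it's correct.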

The class of developer blindly copying and pasting answers will not have the expertise to spot hallucinations, or likely even to fix the inevitable bugs they introduce. Additionally, ChatGPT by its nature elides the sources of its answers. At the very least a SO answer has some provenance: not only a named poster, but some social signaling through votes that the answer is legitimate.

ChatGPT answers don't have any of that. It will also happily hallucinate references.

Conditioning junior developers and learners to rely on and trust AI coding is setting them up to fail. It's also going to stunt their growth as developers, because they'll never gain any domain knowledge. In the meantime, they'll be unknowingly sabotaging products with legit-looking but broken code.

I should be worried that the very worst developers might paste bad code from ChatGPT, and that's why it's dangerous? Looks an awful lot like mental gymnastics to me.