Comment by ang_cire

10 hours ago

I think that comment is conflating two different things: 1) people like you and me, who use LLMs for exploring alternative methods to our own, and 2) people who treat LLMs like Stack Overflow answers they don't understand but trust anyway because they're on SO.

Yes, there are tasks or parts of the code that I'm less interested in and would happily delegate or negotiate off to someone else, but I wouldn't give those to a writer who happens to be able to write in a way that approximates program code; I'd give them to another dev on the team. A junior dev gets junior dev tasks, not tasks they don't have the skills to actually perform, and LLMs aren't even at an actual junior dev level, imhe.

I noted in another comment that I've also used LLMs to get ideas for alternate ways to implement something, or, as you said, to "jump start" new files or programs. I'm usually not actually pasting that code into my IDE, though; I've tried that, and the work to make LLM-generated code fit into my programs is way more than just writing out the bits I need, where I need them. That is clearly not the case for a lot of people using LLMs, though.

I've seen devs submitting PRs with giant blocks of clearly LLM-gen'd code that they've tried to monkey-wrench into working with the rest of the program (often breaking conventions or secure coding standards). And these aren't junior devs; they're devs who have been working here for years and years.

When you force them to do a code review, they know it's not up to par, but there's a weird attitude that LLM-gen'd code is more acceptable to submit with issues than personally written code. As though it's the LLM's fault, or the LLM's job to fix, even though they prompted, copied, badly integrated, and PR'd it.

And that's where I think there is a stark divide. I think you're on my side of it (at least, I didn't get the impression that you hate coding); it just sounds like you haven't really seen the other side.

My personal dime-store psych theory is that it's the same mental mechanism that leads non-technical computer users to improperly trust computers to produce correct information, now happening to otherwise technical folks too, because "AI" is still a black-box technology to most of us, just as computers in general are to non-techies.

LLMs are really, really cool and really, really impressive to me, and I've had 'wow' moments where they did something that makes you forget what they are and how they work. But you can't let that emotional reaction override the part of you that knows it's just a token chain. When people do, they end up (obviously on the very extreme end) 'dating' them, letting them make consequential "decisions", or just over-trusting their output/code.