Comment by JackC

4 hours ago

"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."

I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10-repo project I haven't worked on recently" to "for this next step I need a vpn multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful, sometimes it's more like a lot of help proving a fact about one line of code.

Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.

> to "for this next step I need a vpn multiplexer written in a language I don't use"

but that acceleration is exactly because you're not good at that language

  • Can't we reach a compromise where a proven track record of good LLM use by a contributor or a company (e.g. Bun) gets pre-approved, or at least entertained? A blanket ban on a new technology shouldn't be the default option.

    • Certainly not in the case of asking it to do something you'd be slow at because you are unfamiliar. If you are not familiar enough with the system, how are you confident that what the LLM has produced is valid and complete? IMO the people saying LLMs make them 10x faster were either very bad to start with (like me!) or are not properly looking at the results before throwing them over the wall.

      And how do you know if that is the case or the person/team using the LLMs is one of the good ones?

      So the safest answer is just "no".

    • if they had a good track record, the current submission that led to this article damaged it.

      i am reminded of this quote: it takes more cleverness to debug code than it takes to write it. if you write code as clever as you can, by definition you are not clever enough to debug it. using an LLM makes your code many times more clever than what you could write yourself, which means, by the same definition, the code is too clever for you to understand or debug.

I use LLMs as a tutor. They tailor their answers exactly to the situation I am in, even if they hallucinate. I can correct them on the fly, and that also serves as training. I try not to copy and paste, and instead type every line of code by hand. That doesn't always happen, but I usually understand the code I am writing.

> I'm a good programmer, and it speeds up my work a lot

The problem with this line of thinking is the same as with "I'm such a good C developer, my code is totally safe!".

And we see what reality tells us instead: yes, there are people for whom these claims are true, but they are not even a decently sized minority.

yep. as an expert programmer there are things i did not have access to. for example, i have an embedded-lite hardware project that required a one-line patch to a linux kernel module.

i know what a kernel module is and i'm reasonably certain that the patch is safe, but there is no way in hell i would have found that solution (i would have given up). in a world without llms, the project would have died.

It's great when I know how the code should look. Sometimes I just can't bring myself to write yet another http handler.