Comment by necovek
10 months ago
"Let's identify specific, clear areas for improvement, and if they are not able to improve, let's fire them": it's as simple as that.
Teaching a human who has motivation, potential, and a desire to learn is both easier and (for most humans) more rewarding than trying to teach an LLM to write good code every time: humans tend to value their personal experiences more, whereas an LLM relies more on its training corpus. So when I've seen people massage LLM output into something decent or excellent, it took them more time than writing it from scratch without an LLM would have.
Which makes LLMs mostly a curiosity, and not a productivity booster. Can it get there? I hope it can, because that would be amazing.
None of this responds to what I just wrote. Can you engage with the question I asked directly? Thanks!
You asked:
This is directly answered with my first paragraph: that's exactly what I would think of them, and how I would act on it.
Your first question was:
In the second paragraph, I explained why it's better to do a code review for a crappy pull request that's human-produced vs LLM-generated: it is easier, faster, and more psychologically rewarding.
If you are talking about a case where an inexperienced human uses an LLM to start off with a crappy code change, but then adapts the output during the review process and potentially learns through it (though research confirms people learn better when they make mistakes themselves), they still won't be able to use the LLM to produce comparable code the next time. They'll have to review and improve the output by hand before putting it up for review by somebody else, which negates any productivity gain (the original premise) and likely reduces the learning potential.
If there was a question I misinterpreted, please enlighten me. Thanks! :)
Jim and Toby are interviewing Darryl Philbin for the position of manager at Dunder Mifflin Scranton. Jim asks what Darryl would do to resolve a conflict between two employees in the warehouse Darryl already managed. "I'll answer that, Jim. I would use it as an opportunity to teach... about actions... and consequences of actions".
That's the answer you just gave me. Good note! (Darryl didn't get the job.)
You're dodging my point. If you are managing a team where people are using LLMs to generate pull requests full of "crap" code (your word), you have a mismanaged team, and you would with or without the LLMs, because on a well-managed team people don't create PRs full of crap code.
I'm fine if you want to say LLMs are dangerous tools in the hands of unseasoned developers. Fine, you can have a rule where only trusted developers get to use them. That actually seems pretty sane!
But a trustworthy developer using an LLM isn't going to be pushed by the LLM into creating "crap" PRs, because the LLM doesn't make the PRs, the developer does. If the developer isn't reading the code the LLM is producing, they're not doing their job.
Sometimes you get people saying "ok but reading that code is work so how is the LLM saving any time", which is something you could also say about adding any human developer to a team; their code also has to get reviewed.
So help me understand how your concerns here cohere.