Comment by jihadjihad

7 hours ago

s/Django/the codebase/g, and the point stands against any repo for which there is code review by humans:

> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.

> Django contributors want to help others, they want to cultivate community, and they want to help you become a regular contributor. Before LLMs, this was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.

> In this way, an LLM is a facade of yourself. It helps you project understanding, contemplation, and growth, but it removes the transparency and vulnerability of being a human.

> For a reviewer, it’s demoralizing to communicate with a facade of a human.

> This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.

I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.

There is little doubt that if we as an industry fail to establish and defend a healthy culture for this sort of thing, it's going to lead to a whole lot of rot and demoralization.

AI autocomplete and suggestions built-in to Jira are making our ticket tracker so goddamn spammy that I’m 100% sure that “feature” has done more harm than good.

I don’t think anybody’s tracking the actual net-effects of any of this crap on productivity, just the “vibes” they get in the moment, using it. “I got my part of this particular thing done so fast!”

I believe that to be the case, in part, because not a lot of organizations are usefully tracking overall productivity to begin with. Too hard, too expensive. They might “track” it, but so poorly it’s basically meaningless. I don’t think they’ve turned that around on a dime just to see if the c-suite’s latest fad is good or bad (they never want a real answer to that kind of question anyway).

  • > just the “vibes” they get in the moment, using it. “I got my part of this particular thing done so fast!”

    In the pre-AI era it was much easier to identify people in the workplace who weren't paying attention to their work. To write something about a project you had to at minimum invest some time into understanding it, then think about it, then write something in the ticket, an e-mail, or the codebase.

    AI made it easy to bypass all of that and produce words or code that look plausible enough. Copy and paste into ChatGPT, copy and paste the blob of text back out, click send, and now it's somebody else's problem to decipher it.

    It gets really bad when the next person starts copying it into their ChatGPT so they can copy and paste a response back.

    There are entire groups of people just sending LLM slop back and forth and hoping that the project can be moved to someone else before the consequences catch up.

  • Ironically, my favorite use of Claude is removing caring about Jira from my workflow. I already didn't care about it, but now I don't have to spend any time on it.

    I treat Jira like product owners treat the code. Which is infinitely humorous to me.

    • Horrible degrading take. Be the change you want to see. Don't fuel the fire that's burning you.

      If something's not happening, something else is making it impractical. I'm saying this as a 10+ year product manager and R&D person with 20+ more years of engineering on top.

      I also had to deal with "managers are just complicating things" or "users are stupid and don't understand anything"; do you think I complained? No, I had engineers trade trust in their ingenuity for trust in my wisdom, and brought them to customer calls and presented them to users almost like royalty, which made them incredibly respectful as soon as they saw what kind of crap users had to deal with.

    • Teach me your ways. I’ve long wished for an actual, human secretary to handle that for me. The context-switching and digging around in a painful, slow interface (I don’t just mean Jira, 100% of the ones project managers find acceptable seem to have this quality) is such a productivity killer, and it’s so easy to miss important things in all the noise.

    • This is a valuable comment. It's exactly the demoralization that others fear we're headed toward.

In the old days, you could assume that a PR was being offered in good faith by someone who was really fixing a problem. You might disagree with the proposed solution and reject the PR as written, but you assumed good faith. AI has flipped that on its head. Now, everyone assumes they are interacting with an AI (or at least a human using one to generate all the content) and that the human has little to no understanding of what they are proposing. Ultimately, the broad use of AI erodes trust. And that’s a shame.

  • Well said. It is all about trust.

    Just like "etiquette" accomplishes no purpose except letting people easily figure out who put the effort into learning it, vs. who didn't.

    Back then this distinguished by class, but ironically, today when it's so easy to learn, it finally distinguishes by merit.

> I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.

Are people generally unhappy with the outcomes of this? Anecdotally, it does seem to pass review later on. Code is getting through this way.

  • It's slippery. You're swamped with low-effort PRs, can't possibly test and review all of them. You will become a visible bottleneck, and guess whether it's easier to defend quality vs. "blocking a lot of features" which "seem to work". If you're tied by your salary as a reviewer, you will have to let go, and at the same time you'll suffer the consequences of the "lack of oversight" when things go south.

    • The Board has decided that we can no longer afford artisanal, hand-crafted software, and that machine-made will suffice for nearly all use cases.

      Enshittification Enterprise Edition.
