Comment by armchairhacker
3 days ago
OP’s code (at least plausibly) helped him. From https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...
> Damn, I can’t debug OCaml on my Mac because there’s no DWARF info…But, hey, there’s AI and it seems to one-shot fairly complex stuff in different languages, from just a Github issue…My needs are finally taken care of!
So I do believe that using an LLM to generate a big feature, as OP did, can be very useful, so much so that I expect to see such cases more frequently soon. Perhaps in the future everyone will constantly generate big program/library extensions that are buggy except for their particular use case, could be swapped with someone else’s non-public extensions generated for the same use case, and must be re-generated each time the main program/library updates. And that’s OK, as long as the code generation doesn’t use too much energy or cause unforeseen problems. Even badly-written code is still useful when it works.
What’s probably not useful is submitting such code as a PR. Even if it works for its original use case, it almost certainly still has bugs, and even ignoring bugs it adds tech debt (and with bugs, the tech debt is significantly worse). Our code already depends on enough libraries that are complicated, buggy, and badly written, to the extent that they slow development and make some feasible-sounding features infeasible; let’s not make it worse.
The whole issue, as the maintainers clearly explained, isn’t that the code is incorrect or useless; it’s the transfer of the burden of maintaining this large codebase onto someone else. Basically: “I have this huge AI-generated pile of code that I haven’t fully read, understood, or tested. Could you review, maintain, and fix it for me?”
> cause unforeseen problems
This is literally the point of having software developers, PR reviews, and other such things. To help prevent such problems. What you're describing sounds like security hell, to say nothing of the support nightmare.
The point is that one-off LLM-generated projects don’t get support. If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, the people who decided to rely on it can pool a fund and hire real developers to fix it, probably by rewriting the entire thing from scratch. If a vibe-coded project becomes so popular that people start being pressured or indirectly forced to rely on it, then there’s a problem; but my point is that important shared codebases shouldn’t contain unreviewed LLM-generated code, while it’s OK for unimportant code like one-off features.
And people still shouldn’t be using LLM-generated projects where security or reliability is required. For mundane tasks, I can’t imagine worse security or reliability consequences from those projects than from existing projects that pull in small untrusted dependencies.
> The point is that one-off LLM-generated projects don’t get support.
Just sounds like more headaches for maintainers and those of us who provide support for FOSS: 5 hours into trying to pin down an issue, and the user suddenly remembers they generated some code 3 years ago.
> If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, whoever decides to use it can pool a fund to hire real developers to fix it, probably by rewriting the entire thing from scratch.
Considering FOSS already has a funding problem, you seem very optimistic about this happening.
But none of that matters.
If an LLM can one-shot a mostly working patch for your use case, and you can’t be assed to go through it and make sure it’s rock solid and up to spec, then do not submit a PR with that code, because that’s stupid. Literally any other human being with a Claude subscription can also one-shot a mostly working patch for their own needs.
AI PRs are worthless: if the models are that good, nobody needs to share anything anymore anyway; if they aren’t that good, the PRs are spam.
The reason people keep submitting giant LLM PRs is that they are deluded morons who somehow believe that their ideas are magically important, that LLMs trivially turn those ideas into quality output, and that nobody else could do the same.
It's just ego. Believing that only YOU can contribute something produced by a machine that takes natural human language as input is asinine. Anyone can produce it. And if anyone can produce it, nobody needs YOU to submit a PR.
If you prompted an LLM to produce code, then so can the maintainers of the project. Why are you so full of yourself that you think they need you to generate a PR for them? Do you think OSS programmers don’t know how to use LLMs?
> Even badly-written code is still useful when it works.
Sure, just as long as it’s not used in production or to handle customer data or other sensitive data. But for tools, utilities, weekend hack projects, coding challenges, etc., by all means.
The statement preceding your quote is more telling:
> as long as the code generation doesn’t use too much energy or cause unforeseen problems.
Badly-written code can be a time bomb, just waiting for the right situation to explode.
And using an LLM to generate garbage also consumes a lot of energy.
Exactly.
And yeah, people will use AI for important things it’s not capable of; they have already started and will continue to do so regardless. We should find good ways for people to make their lives easier with AI, because people will always try to make their lives easier, and otherwise they’ll find bad ways themselves.