Comment by squigz

3 days ago

> cause unforeseen problems

This is literally the point of having software developers, PR reviews, and other such things. To help prevent such problems. What you're describing sounds like security hell, to say nothing of the support nightmare.

The point is that one-off LLM-generated projects don’t get support. If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, the people who decided to rely on it can pool a fund and hire real developers to fix it, probably by rewriting the entire thing from scratch. If a vibe-coded project becomes so popular that people start being pressured or indirectly forced to rely on it, then there’s an issue; but I’m saying that important shared codebases shouldn’t have unreviewed LLM-generated code. It’s fine for unimportant code like one-off features.

And people still shouldn’t be using LLM-generated projects when security or reliability is required. For mundane tasks, I can’t imagine those projects having worse security or reliability consequences than existing projects that use small untrusted dependencies.

  • > The point is that one-off LLM-generated projects don’t get support.

    Just sounds like more headaches for maintainers and those of us who provide support for FOSS. 5 hours into trying to pin down an issue and the user suddenly remembers they generated some code 3 years ago.

    > If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, the people who decided to rely on it can pool a fund and hire real developers to fix it, probably by rewriting the entire thing from scratch.

    Considering FOSS already has a funding problem, you seem very optimistic about this happening.

  • But none of that matters.

    If LLMs can one-shot a mostly working patch for your use case, and you can't be assed to go through it and make sure it's rock solid and up to spec, then do not submit a PR with that code, because that's stupid. Literally any other human being with a Claude subscription can also one-shot a mostly working patch for their needs.

    AI PRs are worthless, because if they are that good, nobody needs to share anything anymore anyway! If they aren't that good, they are spam.

    The reason people keep submitting giant LLM PRs is that they are deluded morons who somehow believe that their ideas are magically important, that LLMs trivially turn those ideas into quality output, and that nobody else can do the same.

    It's just ego. Believing that only YOU can contribute something produced by a machine that takes natural human language as input is asinine. Anyone can produce it. And if anyone can produce it, nobody needs YOU to submit a PR.

    If you prompted an LLM to produce code, then so can the maintainers of the project. Why are you so full of yourself that you think they require you to generate a PR for them? Do you think OSS programmers don't know how to use LLMs?

    • I agree fully, and I think it can be condensed quite a bit further: you get paid to code, so code. And if it's free work, for instance in an open-source context, realize that dumping trash into the workflow imposes a real cost, so the effect is much the same: even if you didn't get paid, others also don't get paid to review your junk.