Comment by SuperV1234

4 hours ago

You'd have to review and verify even changes that you've written by hand. You might think that your hand-written code satisfies A+B+C+D+E, but until you've verified it, you cannot prove it.

That's not any different from LLM-assisted writing -- humans are inherently non-deterministic as well :)

The other fallacy is assuming that everyone else's experience with LLM-assisted writing is the same as yours. Personally, I've rarely encountered the issue you've mentioned -- most of my LLM-assisted coding has been a net positive and quite straightforward.

Perhaps it's the nature of the problems I work on, perhaps it's the model I chose, perhaps it's my prompting skills. It doesn't matter -- you just cannot assume that because something doesn't work for you, it doesn't work for anyone else.

Another fallacy is treating LLM-assisted coding as a binary option, like the nonsensical Zig policy does.

I agree with you that "vibe coding" something from scratch will likely result in poor quality and many iterations. But that's not the only way to use LLMs.

You can ask LLMs to review hand-written code. You can ask LLMs to optimize a specific part of code. You can ask LLMs to apply a specific refactor. You can ask LLMs to brainstorm solutions to a problem. You can ask LLMs to autocomplete patterns.

I could go on. This stuff works. It is helpful.

Assuming that everyone who uses LLMs is incompetent and preventing them from contributing because of a hunch or your own negative experiences is just asinine.

> The other fallacy is assuming that everyone else's experience with LLM-assisted writing is the same as yours.

that's irrelevant. my choice can only be based on my experience. i am unable to verify your experience, because i am not you. we have different tolerances, and if it works for your project, then fine.

> you just cannot assume that because something doesn't work for you it doesn't work for anyone else.

we are talking about contributions to my project. if LLM coding doesn't work for me, then your LLM-created contributions won't work for me either, because i won't trust them. you can't legislate or enforce trust. trust can only be earned. lack of trust means i have to spend more effort verifying your code.