Comment by vga1

It seems to me that people might be arguing from conflicting hidden premises here. "AI coding" is a spectrum: it could mean something as simple as letting the LLM proofread your changes and then acting on its suggestions with your own human brain, or it could mean just telling the agent what you want and letting it rip and tear until it is done.

If I do the latter and submit a PR to something like Zig, I'll certainly be caught and rightfully chastised. If I do the former, my PR will be better, without anybody besides myself having any way of knowing how it got better. These days, when I contribute to open source, I probably do something in between.

Blanket banning all of these seems like a bad idea to me. It actively gates people like me from contributing, because I respect these people and projects that much. It would feel like I was doing something they find disgusting if my work had touched an LLM, and I obviously don't want to do that to people I respect. But it's fine; there are plenty of things to do in the world even when some doors are closed.

I do not presume to have any say in the Zig project's well-argued decisions[0] -- I'm not really even a user, let alone someone important like a contributor. Their point about preferring human contact is superb, frankly. It is probably a different kind of problem in an open-source project staffed largely by remote workers, where human contact is scarce.

[0] https://kristoff.it/blog/contributor-poker-and-ai/

> Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing

in my projects i will reject any contribution that i do not understand, even if the contribution is handwritten by an expert developer. that developer will have to earn my trust like anyone else, like you would have to.

LLM contributions are non-deterministic, which means they can never be trusted.

therefore, if you use an LLM to contribute, you cannot earn my trust. if you believe that you cannot create a meaningful contribution without the use of an LLM, then you are admitting that you are not skilled enough to understand the code that you contribute, because if you could understand it, then you could write it yourself. i want your personal contributions, not those of your LLM. i want contributions that the submitter actually understands. i want you to earn my trust by showing me that you understand what you are doing. i want you to grow your understanding of my project. none of this happens when you use LLMs.

if you are unable to make a contribution without the help of an LLM, then you are not ready to contribute. try looking for smaller issues that you can work on instead until you have learned enough to make larger contributions.

  • > because if you could understand it, then you could write it yourself.

    I accept most things you said there as valid opinions, but this is where the logic goes wrong.

    I use LLMs to get more of the only resource that ultimately matters, now that my basic and mid-level needs are largely met: time. That means I waste far less time in front of the computer typing code, and have far more time for more useful things, like hobbies, art, and being with my children.

    But as I said before, every project is obviously allowed to make its own rules, and contributors should follow them. There are plenty of projects that welcome AI deniers and plenty that prefer AI aficionados.

    At least for now. My belief is that one of those groups will fade away the way horseback riding did, but we'll see. Perhaps you have heard the famous stages quoted by many different people in different forms: first an idea is ridiculed, then it is attacked, then it is accepted. Some open-source communities have clearly entered the attacking phase in the last year or so.

    • you are saying that even if you understand the code, using an LLM saves you time writing it. fair enough[*]. the problem on my side is still that if you didn't write the code yourself, i have no evidence that you actually understood it. the only way to prove that you understand the code is to write it yourself. that's where the trust building comes in. you may actually understand the code, but i can't trust that you do.

      [*] in my opinion it takes more time to verify that the LLM code is correct than it takes to write it yourself. based on that, if you save time using an LLM then you didn't spend enough time to verify that the code is correct.

      > Some open-source communities have clearly entered the attacking phase in the last year or so

      i feel it's more like defense, but yes.