Comment by em-bee
6 hours ago
> Blanket banning all of these seems like a bad idea to me. It actively gates people like myself from contributing.
in my projects i will reject any contribution that i do not understand. even if the contribution is handwritten by an expert developer. that developer will have to earn my trust like anyone else, like you would have to.
LLM contributions are non-deterministic, which means they can never be trusted.
therefore, if you use an LLM to contribute, you can not earn my trust. if you believe that you can not create a meaningful contribution without the use of an LLM then you are admitting that you are not skilled enough to understand the code that you contribute. because if you could understand it, then you could write it yourself. i want your personal contributions, not those of your LLM. i want contributions that the submitter actually understands. i want you to earn my trust by showing me that you understand what you are doing. i want you to grow your understanding of my project. none of this happens when you use LLMs.
if you are unable to make a contribution without the help of an LLM then you are not ready to contribute. try looking for smaller issues that you can work on instead until you have learned enough to make larger contributions.
> i will reject any contribution that i do not understand
Fair.
> that developer will have to earn my trust like anyone else
What does it take to "earn your trust"?
> LLM contributions are non-deterministic, which means they can never be trusted.
Provably incorrect. LLM contributions can be reviewed, tested, and understood like any other contribution. There's nothing "special" about LLM contributions.
Contributions authored by human brains are also non-deterministic; perhaps if the author had been in a slightly different mood, they'd have formatted the code a bit differently.
> therefore, if you use an LLM to contribute, you can not earn my trust.
The premise is wrong.
> if you believe that you can not create a meaningful contribution without the use of an LLM then you are admitting that you are not skilled enough to understand the code that you contribute
What if I believe I can do so without an LLM, but that it could be even better with an LLM?
What if I'm great at understanding code, but terrible at writing it?
Again, this is a premise that you just decided to take as truth, without proof.
> because if you could understand it, then you could write it yourself.
False. I can understand a novel algorithm by reading and studying it, but perhaps I could not have come up with it myself.
> i want you to earn my trust by showing me that you understand what you are doing
I can easily do that even if my contribution involves LLM assistance.
> i want you to grow your understanding of my project
Ditto.
> none of this happens when you use LLMs
False. Why do you think so?
> if you are unable to make a contribution without the help of an LLM then you are not ready to contribute.
Again, this is your opinion and you have no way of proving it. I can prove the opposite.
> What does it take to "earn your trust"?
multiple successful contributions of increasing complexity, among other things.
>> LLM contributions are non-deterministic, which means they can never be trusted.
> Provably incorrect. LLM contributions can be reviewed, tested, and understood like any other contribution. There's nothing "special" about LLM contributions.
read this comment to see what i mean: https://news.ycombinator.com/item?id=47968180
> Contributions authored by human brains are also non-deterministic; perhaps if the author had been in a slightly different mood, they'd have formatted the code a bit differently.
i can tell a human to focus on a certain issue. they will either listen and follow my instructions, or i will reject their contribution. the LLM is almost guaranteed not to follow all my instructions and to make changes i didn't ask for. see my comment above.
>> therefore, if you use an LLM to contribute, you can not earn my trust.
> The premise is wrong.
how so?
>> if you believe that you can not create a meaningful contribution without the use of an LLM then you are admitting that you are not skilled enough to understand the code that you contribute
> What if I believe I can do so without an LLM, but that it could be even better with an LLM?
what you believe is not relevant. only what you can convince me of. you'll have to first show that you actually can work without an LLM before i will consider your contribution.
> What if I'm great at understanding code, but terrible at writing it?
your problem, not mine. if you are terrible at writing code but good at understanding it, then it's your choice to only do code reviews. you can still make a meaningful contribution that way. i'd even let you write code so you can practice that, but i am not interested in your LLM-generated code.
> Again, this is a premise that you just decided to take as truth, without proof.
i don't need proof. i need trust. you need to convince me that your code can be trusted.
>> because if you could understand it, then you could write it yourself.
> False. I can understand a novel algorithm by reading and studying it, but perhaps I could not have come up with it myself.
that's called learning. once you have learned it, you can write it. but in order to learn effectively you also have to practice. if you let an LLM write all your code then you are not practicing, so you won't improve.
>> i want you to earn my trust by showing me that you understand what you are doing
> I can easily do that even if my contribution involves LLM assistance.
it depends on the level of assistance. i am not ruling out use of AI to do research and learn, just don't let it write the code for you.
>> i want you to grow your understanding of my project
>> none of this happens when you use LLMs
> False. Why do you think so?
as i said above, if you don't practice writing the code yourself you are not learning. not enough at least to satisfy my expectations.
>> if you are unable to make a contribution without the help of an LLM then you are not ready to contribute.
> Again, this is your opinion and you have no way of proving it. I can prove the opposite.
whether you are ready to contribute to my project or not is not something i need to prove. it is a choice based on my preference, which depends on the amount of trust you have earned. you can not prove to me that you are ready to contribute. this is not a standardized test where passing automatically qualifies you. you can only convince me by earning my trust. this is a human decision, based on feelings.
> because if you could understand it, then you could write it yourself.
I accept most things you said there as valid opinions, but this is where the logic goes wrong.
I use LLMs to give me more of the only resource (now that my basic and mid-level needs are largely met) that ultimately matters: time. That means I waste far less time in front of the computer typing code, and spend far more time on more useful things: hobbies, art, being with my children.
But as I said before, every project is obviously allowed to make its own rules, and contributors should obey those rules. There are plenty of projects that welcome AI deniers and plenty of projects that prefer AI aficionados.
At least for now. My belief is that one of those groups will fade away like horseback riding did, but we'll see. Perhaps you have heard the famous stages quoted by many different people in different forms: first an idea is ridiculed, then it's attacked, then it's accepted. Some open-source communities have clearly entered the attacking phase in the last year or so.
you are saying that even if you understand the code, using an LLM saves you time writing it. fair enough[*]. the problem on my side still is that if you didn't write the code yourself, i have no evidence that you actually understood it. the only way to prove that you understand the code is to write it yourself. that's where the trust building comes in. you may actually understand the code, but i can't trust that you do.
[*] in my opinion it takes more time to verify that the LLM code is correct than it takes to write it yourself. based on that, if you save time using an LLM then you didn't spend enough time to verify that the code is correct.
> Some open-source communities have clearly entered the attacking phase in the last year or so
i feel it's more like defense, but yes.