Comment by tick_tock_tick
4 hours ago
Some of these are just straight up unhinged.
> Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
What are they going to do, go back and reject a bug if someone later admits they found it with an LLM? Honestly, they and most other projects would probably be better off just ignoring the situation until norms start developing.
They're trying to avoid a Boy Who Cried Wolf situation.
If they get swamped with 100 bug reports that turn out, after investigation, to be hallucinations, it's likely they'll ignore a real bug or lose it in the noise.
An LLM-generated bug report that pretends to be human-written would be abusing that presumption of validity, and is therefore considered a dick move.
> If they get swamped with 100 bug reports that turn out, after investigation, to be hallucinations, it's likely they'll ignore a real bug or lose it in the noise.
But they're saying that even if those are 100 correct bug reports, it's still banned.
That's hysterical.
That's throwing the baby out with the bathwater.
The assumption here is that people act in good faith. If you break the rules, this indicates that you are not acting in good faith, and perhaps should no longer be welcome.
Sounds very welcoming. I came here for the code, not to join some social club.
What are you even talking about lol? The policy doesn't imply that at all.
That's in the "allowed with caveats" section. It's just saying not to open bug reports without first reading them yourself, or your bug may be closed. No one is saying "by policy we will have to add the bug back in," jesus christ.
The policy is insanely straightforward; idk how you can be misinterpreting it this badly. It's just "disclose that you used a model, and you are on the hook for reviewing model output as a human," plus some clear-cut examples.
See https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...