Comment by SOLAR_FIELDS
8 hours ago
Your first example should be solved by the maintainers outlining clear contribution guidelines. It's not hard to point some automation at a PR and have it comment if someone didn't follow the contribution guidelines.
Non-matching styles can mostly be solved with linting and static analysis.
Beyond that, there's no fix for bad code other than manual review. But doing those things should significantly cut down on your noise.
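On the linting point: even a minimal ESLint config (just a sketch; these are standard ESLint rules, nothing project-specific), run in CI on every PR, lets the bot leave the "please fix the style" comment instead of a maintainer:

    // .eslintrc.js -- minimal sketch; pick rules that match your project's style
    module.exports = {
      extends: ["eslint:recommended"],
      rules: {
        "no-var": "error",       // disallow var
        "prefer-const": "error", // flag let bindings that are never reassigned
        "eqeqeq": "error",       // require === / !== over == / !=
      },
    };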
Trust me, it's not being solved by a CONTRIBUTING.md with guidelines.
90% of contributors won't read it at all, or will read only parts of it and ignore the rest.
Most PRs you get solve some super-specific individual problem and aren't relevant to the wider community that uses your OSS.
It's not really their fault, but most contributions are so bad that it isn't worth spending time reviewing them earnestly.
(Been maintaining several popular projects for the last 7 years)
Can I get your opinion, as an open-source maintainer, on https://github.com/Judahmeek/frogs?
Edit: https://judahmeek.com/p/we-need-frogs-to-defend-foss may be the better link to start with.
I haven't seen static analysis cover the things I'm concerned with.
Examples: calculating something twice instead of pulling the calculation out of the loop (one case), or instead of pulling it into a separate function so that two separate places where it's calculated don't get out of sync (a different case). Another might be using let x; if (cond) x = v1; else x = v2; (which is 3-9 lines depending on your brace style) instead of const x = cond ? v1 : v2;, when v1 and v2 are relatively simple expressions. I haven't seen a checker that will find stuff like this.
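Roughly what I mean, with made-up names (items, basePrice, taxRate, isAdmin are all hypothetical):

    // Hypothetical inputs, purely for illustration.
    const items = [{ qty: 2 }, { qty: 5 }];
    const basePrice = 10;
    const taxRate = 1.2;
    const isAdmin = true;

    // Case 1: a value recomputed on every loop iteration instead of hoisted out.
    let total = 0;
    for (const item of items) {
      const rate = basePrice * taxRate; // recomputed each pass through the loop
      total += item.qty * rate;
    }

    // The same logic with the calculation hoisted out of the loop.
    const hoistedRate = basePrice * taxRate; // computed once
    let hoistedTotal = 0;
    for (const item of items) {
      hoistedTotal += item.qty * hoistedRate;
    }

    // Case 2: a mutable let plus if/else (3-9 lines depending on brace style)...
    let verboseLabel;
    if (isAdmin) {
      verboseLabel = "Admin";
    } else {
      verboseLabel = "User";
    }

    // ...versus a single const with a ternary.
    const conciseLabel = isAdmin ? "Admin" : "User";

Linters will happily tell me about tabs vs. spaces, but none I've used will flag either of these on their own.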