Comment by jcgrillo
7 hours ago
LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, the people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs, had it not required so much effort. We're just learning now how large that group is.
This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.
> LLMs are in this case enabling bad behavior
Yeah, that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code and, more broadly, do one's own work. Most people, however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or the most writing, the most emails, whatever) and to get the widest exposure and attention for the least effort.
Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.
People who use LLMs responsibly to create high-quality output don't look like they're using AI.
For example, using AI as an editor: it doesn't write anything for you, and you avoid taking its suggestions unless you're stuck.