Comment by japhyr
1 day ago
I get that this is a joke, but the bigger issue is that there's no easy fix for this, because other humans are using AI tools in a way that destroys their ability to work meaningfully on a team with competent people.
There are a lot of people reading replies from more knowledgeable teammates, feeding those replies into LLMs, and pasting the response back to their teammates. It plays out in public on open source issue threads.
It's a big mess, and it's wasting so much of everyone's time.
As with every other problem with no easy fix, if it matters enough, it should be regulated. It should not be hard for a company to prohibit LLM-assisted communication if management believes it is inherently destructive (e.g. generated messages being fed straight into message summarizers).