Comment by uniq7

1 day ago

This article's proposal for stopping sloppypasta is to convince the people who do it to stop doing it, but I am more interested in what someone who receives sloppypasta can do.

How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

I've never done that so far, because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.

> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"

  • If I tell someone literally "What value do you have if you're just acting as a pipe to the AI?", I'm pretty sure my manager will schedule a quick 1:1 to ask me why I'm telling peers that they have no value.

    • Your manager should then have a meeting with those coworkers too, or their manager(s). Unless the company's leadership position is "AI at all costs", they may reconsider once they realise blind trust in AI is creating problems.

Yeah it's tough. I tend to take the path of just responding with one line to their wall of text. What are they going to do, send a second wall of text?

I wrote this intending it to be directly sharable and/or to provide a framework for how to have that discussion, kind of like a nohello.net or dontasktoask.com.

I've found success having sidebar conversations with the colleague (i.e., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior. It may also be useful to propose or contribute to a broader policy on appropriate AI use, and leverage that policy as justification for the conversation.

> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

You don’t. You keep these arguments handy for ignoring their output until it’s germane.

> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Address the pattern rather than the person? General team reviews or the like. As long as it's not tech leadership pressing for it...

> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Embrace the tension. Tension is human.

  • Outside of work I wouldn't mind, but I spend 8h/day there and am forced to work with these people, so I'd prefer to keep the drama out so that I can focus on solving problems.

    The other person already demonstrated a lack of professionalism by sharing unverified AI slop so, in case of conflict, I wouldn't be surprised if they continued acting unprofessionally by spreading false rumors, unnecessarily escalating the situation to higher ups, secretly sabotaging the project, etc.

I've had some luck pointing out where the AI is wrong in their sloppypasta, as delicately as one can. Avoiding shame or embarrassment can be a powerful motivator.

The most interesting incident for me was having someone take our Discourse thread, paste it into AI to validate their hurt feelings (it took a follow-up prompt to go full sycophancy), and then post back the response that lambasted me. The mods handled that one before I was aware, but I then did the same thing myself, giving different prompts and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.

  • Ooh, I saw a very similar situation. User went on AI and asked "Which user was disrespectful first" to dunk on another.

    The person being targeted just prompted the same AI with "Which user has thin skin" and instantly the AI turned on the other person. Then the moderators got involved and told the first guy to stop using AI as a genital pleaser.

    • I asked Gemini what it thought. In one of its modes, it said that bringing an AI to a discussion is like bringing a gun to a knife fight: using AI amounts to wielding a rhetorical weapon and an advantage in what everyone thought was a human-to-human forum.