Comment by PKop
6 days ago
> in the hopes that it would convey this message to others, including other agents.
When has engaging with trolls ever worked? When has "talking to an LLM" or human bot ever made it stop talking to you lol?
6 days ago
> in the hopes that it would convey this message to others, including other agents.
When has engaging with trolls ever worked? When has "talking to an LLM" or human bot ever made it stop talking to you lol?
I think this classification of "trolls" is sort of a truism. If you assume off the bat that someone is explicitly acting in bad faith, then yes, it's true that engaging won't work.
That said, if we ask "when has engaging faithfully with someone ever worked?" then I would hope that you have some personal experiences that would substantiate that. I know I do: I've had plenty of conversations with people where I've changed their minds, and I myself have changed my mind on many topics.
> When has "talking to an LLM" or human bot ever made it stop talking to you lol?
I suspect that if you instruct an LLM not to engage, statistically, it won't.
> If you assume off the bat that someone is explicitly acting in bad faith, then yes, it's true that engaging won't work.
Writing a hitpiece with AI because your AI pull request got rejected seems to be the definition of bad faith.
Why should anyone put any more effort into a response than what it took to generate?
> Writing a hitpiece with AI because your AI pull request got rejected seems to be the definition of bad faith.
Well, for one thing, it seems like the AI did that autonomously. Regardless, the author of the message said that it was for others - it's not like it was a DM, this was a public message.
> Why should anyone put any more effort into a response than what it took to generate?
For all of the reasons I've brought up already. If your goal is to convince someone of a position, then the effort you put in isn't tightly coupled to the effort that your interlocutor puts in.