
Comment by LexiMax

7 days ago

The point that we disagree on is what the shape of an appropriate and persuasive response would be. I suspect we might also disagree on who the target of persuasion should be.

Interesting. I didn't really pick up on that. It seemed to me like the advocacy was to not try to be persuasive at all. What led me to that reading were comments like:

> I don't appreciate his politeness and hedging. [..] That just legitimizes AI and basically continues the race to the bottom. Rob Pike had the correct response when spammed by a clanker.

> The correct response when someone oversteps your stated boundaries is not debate. It is telling them to stop. There is no one to convince about the legitimacy of your boundaries. They just are.

> When has engaging with trolls ever worked? When has "talking to an LLM" or human bot ever made it stop talking to you lol?

> Why should anyone put any more effort into a response than what it took to generate?

And others.

To me, these are all clear cases of "the correct response is not one that tries to persuade but that dismisses/ isolates".

If the question is how best to persuade, well, presumably "fuck off" isn't right? But we could disagree; maybe you think that ostracizing/ isolating people somehow convinces them that you're right.

  • > To me, these are all clear cases of "the correct response is not one that tries to persuade but that dismisses/ isolates".

    I believe it is possible to make an argument that is dismissive of them but persuasive to the crowd.

    "Fuck off clanker" doesn't really accomplish the latter, but if I were in the maintainer's shoes, my response would be closer to that than trying to reason with the bad faith AI user.

    • I see. I guess it seems like at that point you're trying to balance something against maximizing who the response might appeal to/ convince. I suppose that's fine; it just seems like the initial argument (certainly upthread from the initial user I responded to) is that anything beyond "Fuck off clanker" is actually actively harmful, which I would still disagree with.

      If you want to say "there's a middle ground", or "you should tailor your response to the specific people who can be convinced", sure, that's fine. Personally, I feel like the maintainer did that. I don't think "fuck off clanker" is anywhere close to compelling to anyone who's even slightly sympathetic to the use of AI, and it would almost certainly not be helpful as context for future agents. But if we agree on the core concept here, that expressing why someone should hold a belief is worthwhile if you want to convince them of it, then that's something.
