Comment by foobiekr
4 hours ago
The problem is the bullshit asymmetry and engaging in good faith.

AI users aren't investing actual work and can generate reams of bullshit that put the burden on others to untangle. And they also aren't engaging in good faith.
Some discussions are dialectic, where a group is cooperatively reasoning toward a shared truth. In dialectical discussions, good faith is crucial. AI can't participate in dialectical work. Most public discourse is not dialectical, it is rhetorical. The goal is to persuade the audience, not your interlocutor. You aren't "yelling into the void", you're advocating to the jury.
Rhetoric is the model used in debate. Proponents don't expect to change their opponents' minds, and vice versa. In fact, if your opponent is obstinate (or a non-sentient text generator), it is easier to demonstrate the strength of your position to the gallery.
People reference Brandolini's "bullshit asymmetry principle" but don't differentiate between dialectical and rhetorical contexts. In a rhetorical context, the strategy is to demonstrate to the audience that your interlocutor is generating text with an indifference to truth. You can then pivot, forcing them to defend their method rather than making you debunk their claims.