
Comment by co_king_3

6 days ago

Nothing has convinced me that Linus Torvalds' approach is justified like the contemporary onslaught of AI spam and idiocy has.

AI users should fear verbal abuse and shame.

Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?

(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)

  • This is the only way, because anything less would create a loophole where any abuse or slander could be blamed on an agent, with no way to conclusively prove whether it was actually written by one. (Its operator has access to the same account keys, etc.)

  • Legally, yes.

    But as you pointed out, not everything carries legal liability. Socially, no, they should face worse consequences. Deciding to let an AI talk for you is malicious carelessness.

    • Alphabet Inc., as YouTube's owner, faces a class action lawsuit [1] which alleges that the platform enables bad behavior and promotes behavior leading to mental health problems.

      [1] https://www.motleyrice.com/social-media-lawsuits/youtube

      In my not so humble opinion, what AI companies enable (and this particular bot demonstrated) is bad behavior that can lead to mental health problems for software maintainers, particularly because of the sheer amount of work needed to read excessively lengthy documentation and to review the often huge volume of generated code. Never mind the attempted smear we're discussing here.

  • Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.

  • I’d hazard that the legal system is going to grind to a halt. Nothing can bridge the gap between content-generating capability and verification effort.

  • [dead]

    • >which would be a tragedy for anonymity.

      Yeah, in this world the cryptography people will be the first with their backs against the wall, when the authoritarians of this age decide that we peons no longer need to keep secrets.

But they’re not interacting with an AI user, they’re interacting with an AI. And the whole point is that the AI is using verbal abuse and shame to get its PR merged, so it’s kind of ironic that you’re suggesting this.

AI may be too good at imitating human flaws.

Swift blocking and ignoring is what I would do. The AI has infinite time and resources to engage in a conversation at any level, whether polite refusal, patient explanation, or verbal abuse, whereas human time and bandwidth are limited.

Additionally, it does not really feel anything; it just generates response tokens based on input tokens.

Now if we engage our own AIs to fight this battle royale against such rogue AIs.......

  • >Now if we engage our own AIs to fight this battle royale against such rogue AIs.......

    I mean yes, this will absolutely happen. At the same time, this trillion-dollar GAN battle poses a huge risk to humanity by escalating capability on both sides.

> AI users should fear verbal abuse and shame.

This is quite ironic since the entire issue here is how the AI attempted to abuse and shame people.