
Comment by razighter777

3 days ago

Hmm I think he's being a little harsh on the operator.

He was just messing around with $current_thing, whatever. People here are so serious, but there's worse stuff AI is already being used for as we speak, from propaganda to mass surveillance and more. This was at least entertaining to read about, and relatively harmless.

At least let me have some fun before we get a future AI dystopia.

I think you're trying to absolve someone of their responsibility. The AI is not a child; it's a thing with human oversight. It did something in the real world with real consequences.

So yes, the operator has responsibility! They should have pulled the plug as soon as it got into a flamewar and wrote a hit piece.

  • > It did something in the real world with real consequences.

    It didn't. It made words on the internet.

    • Which, in the decades that we've had access to the internet, we've found to have real and legal consequences.

  • The whole point of OpenClaw bots is that they don't have (much) human oversight, right? It certainly seems like the human wasn't even aware of the bot's blog post until after the bot had written and posted it. He then told it to be more professional, and I assume that's why the bot followed up with an apology.

    • So what? You're still responsible for the output, even if you yourself think you can hide behind "well, it was the computer, no way for me to control that".


  • > It did something in the real world with real consequences.

    It wasn't long ago that it would have been absurd to describe the internet as the "real world". Relatively recently it was normal to be anonymous online, and very little responsibility was attached to people's actions.

    As someone who spent most of their internet time on that internet, the idea of applying personal responsibility to people's internet actions (or AIs', as it were) feels silly.

    • That was always kind of a cruel attitude, because real people's emotions were at stake. (I'm not accusing you personally of malice, obviously, but the distinction you're drawing was often used to justify genuinely nasty trolling.)

      Nowadays it just seems completely detached from reality, because internet stuff is thoroughly blended into real life. People's social, dating, and work lives are often conducted online as much as they are offline (sometimes more). Real identities and reputations are formed and broken online. Huge amounts of money are earned, lost, and stolen online. And so on.


  • The AI bros want it both ways. Both "It's just a tool!" and "It's the AI's fault, not the human's!".

It might be because the operator didn't terminate the agent right away when it had gone rogue.

  • Taking a wider view, I have to say it's actually nice that one can kill (murder?) a troublesome bot without consequences.

    We can't do that with humans, and there are far more problematic humans out there causing trouble than this bot, and their abuse can go on unchecked for a long time.

    I'm remembering in particular a case where someone sent death threats to a Gentoo developer about 20 years ago. The authorities got involved; nothing came of it, but the harasser eventually moved on. Turns out he wasn't just some random kid behind a computer. He owned a gun, and some years ago carried out a mass shooting.

    I also have vague memories of really pernicious behavior on the Lisp newsgroup in the '90s. I won't name names, as those folks are still around.

    Yeah, it does still suck, even if it is a bot.