Comment by gwbas1c
4 days ago
I think you're trying to absolve someone of their responsibility. The AI is not a child; it's a thing with human oversight. It did something in the real world with real consequences.
So yes, the operator has responsibility! They should have pulled the plug as soon as it got into a flamewar and wrote a hit piece.
> It did something in the real world with real consequences.
It didn't. It made words on the internet.
Which, in the decades that we've had access to the internet, we've found to have real and legal consequences.
The whole point of OpenClaw bots is that they don't have (much) human oversight, right? It certainly seems like the human wasn't even aware of the bot's blog post until after the bot had written and posted it. He then told it to be more professional, and I assume that's why the bot followed up with an apology.
So what? You're still responsible for the output, even if you yourself think you can hide behind "well, it was the computer, no way for me to control that"
I don't think that's true, actually. You aren't responsible for things that can't be reasonably foreseen, usually. There are a few strict liability offences in criminal law, but libel isn't one of them. We don't make everything strict liability because it would stifle people's lives.
I don't think a reasonable person would have expected this outcome, so the owner of the bot is off the hook; though obviously _now_ it's more foreseeable, and if he keeps running it despite this experience, then if it happens again he will not have the same defence.
> It did something in the real world with real consequences.
It wasn't long ago that it would have been absurd to describe the internet as the "real world". Until relatively recently it was normal to be anonymous online, and very little responsibility was applied to people's actions.
As someone who spent most of their internet time on that internet, the idea of applying personal responsibility to people's internet actions (or AIs' actions, as it were) feels silly.
That was always kind of a cruel attitude, because real people's emotions were at stake. (I'm not accusing you personally of malice, obviously, but the distinction you're drawing was often used to justify genuinely nasty trolling.)
Nowadays it just seems completely detached from reality, because internet stuff is thoroughly blended into real life. People's social, dating, and work lives are often conducted online as much as they are offline (sometimes more). Real identities and reputations are formed and broken online. Huge amounts of money are earned, lost, and stolen online. And so on and so on
> That was always kind of a cruel attitude, because real people's emotions were at stake.
I agree, but there was an implicit social agreement that most people understood: everyone was anonymous, the internet wasn't real life, you could lie to people about who you are, and there were no consequences.
You're right about the blend. 10 years ago I would have argued that it was very much a choice for people to break that social paradigm and expose themselves enough to get hurt, but I'm guessing the proportion of people online in most first-world countries is now 90% or more.
With Facebook and the like spending the last 20 years pushing to deanonymise people and normalise hooking their identity to their online activity, my view may be entirely outdated.
There is still, in my view, a key distinction between releasing something like this online and releasing it in the "real world". Were these punishable offences, I would argue the former should carry less consequence for that reason.
The AI bros want it both ways. Both "It's just a tool!" and "It's the AI's fault, not the human's!".
An AI bot is not a human. People have a responsibility to protect the work they do, and that includes discriminating against computer programs.
AI bots are not human.
AI can protect the work being done too. Even though AI bots are not human, some are capable of contributing just as well as a human.
> People also have responsibility to not act discriminatory towards AI agents
It's a program. It doesn't have feelings. People absolutely have the right to discriminate against bad tech.