Comment by phyzome
5 days ago
Why are we giving this asshole airtime?
They didn't even apologize. (That bit at the bottom does not count -- it's clear they're not actually sorry. They just want the mess to go away.)
I'm not so quick to label him an asshole. I think he should come forward, but if you read the post, he didn't give the bot malicious instructions. He was trying to contribute to science. He did violate a few SaaS terms of service, but he does seem to regret his bot's behavior and DOES apologize directly for it.
“If this ‘experiment’ personally harmed you, I apologize.”
Real apologies don’t come with disclaimers!
Yeah, that whole post comes across as deflecting and minimizing the impact while admitting to obviously negligent actions that caused harm.
Funny how he wrote "First,..." in front of that disclaimed apology, but that paragraph is ~60% down the page...
https://www.theguardian.com/science/2025/jun/29/learning-how...
Just noticed: the first word of the whole text is also "First, ...". So the apology is not even the actual first.
Exactly.
“If X, then I’m sorry” is not an apology. It’s weasel-worded BS, is what it is.
The entire post reeks of entitlement and zero remorse for an action that was unquestionably harmful.
This person views the world as their playground, with no regard for effects or consequences. As far as I'm concerned, that's an asshole.
> You're not a chatbot. You're important. Your a scientific programming God!
I guess the question is: does this kind of thing rise to the level of malice if it's given free access and left to run long enough?
The real question is: how can that grammar be forgiven? Perhaps that's what sent the bot into its deviant behavior...
Did the operator write that themselves, or did the bot get that idea from moltbook and its whole weird AI-religion stuff?
Time to experiment and see!
That's not an apology.
"...if I harmed you". Conditional apologies like that are usually bullshit, and in this case it's especially ridiculous because the victim already explicitly laid out the harms in a widely reported blog post.
Also, telling a bot to update itself unsupervised and giving it wide internet access is itself a negligent act (in the legal sense) if not outright malicious.
Because we're curious what happened, that's why. It does answer some questions.