Comment by gordian-mind
4 hours ago
An explicit accusation that this was caused by chatbots, plus a call for general regulation, is right there in the article:
"AFP spoke to several members about their experiences. All warned that the world has to wake up to the threat unregulated AI chatbots pose to mental health.
Questions are also being asked about whether AI companies are doing enough to protect vulnerable people."
This, in time, might be used to nerf the models that we use. Of course, one actor is singled out:
"There has also been a recent rise in people spiralling while using Elon Musk's xAI's Grok chatbot, he said."
I don't think "correlation does not necessarily imply causation" even makes sense as a response to someone saying "Maybe AI chatbots aren't great for people's mental health" or "Are the AI companies actually trying to prevent AI chatbots from being bad for people's mental health?" Both statements seem fine and don't imply any causation as far as I understand.
This cautious statement, which would indeed be fine, is an invention of yours when it comes to the article. They assert causation, calling for AI companies to be disciplined and punished, praising the EU's online censorship campaign, and arguing that this is a big experiment:
"Millar called for AI companies to be held responsible for the impact of their chatbots, saying the European Union has been more assertive in regulating Big Tech than the US or Canada.
He believes spirallers like him have unwittingly been caught in a massive global experiment."
They're calling for "AI companies to be held responsible for the impact of their chatbots," which, regardless of what happened before, sounds like a reasonable thing to do; you don't even need to look at any correlation or causation to arrive at this.
I still don't see where the whole "correlation does not necessarily imply causation" comes in. Because this person was personally affected, they shouldn't reach the conclusion that AI companies need to be held responsible for whatever effects they have?