> As such, if he really is suffering a mental health crisis related to his use of OpenAI's product, his situation could serve as an immense optical problem for the company, which has so far downplayed concerns about the mental health of its users.
Yikes. Not just an optics* problem, but one also has to wonder whether he's pouring so much money into the company because he feels he "needs" to (whatever basis of coercion exists to support his need to get to the "truth").
That's bizarre. I wonder whether the use of AI was actually a contributing factor to his psychotic break, as the article implies, or whether the guy was already developing schizophrenia and the chatbot just steered the direction it took from there. I'm vaguely reminded of people getting sucked down conspiracy theory rabbit holes, though this seems far more extreme in how unhinged it is.
In form, the conversation he had (which appears to have ended five days ago, along with the rest of his public footprint) reads to me very much like a heavily refined and customizable version of "Qanon," [1] complete with intermittent reinforcement. That conspiracy theory was structurally novel in its "growth hacking" style of rollout, where ARG and influencer techniques were leveraged to build interest and develop a narrative in conjunction with the audience. That stuff was incredibly compelling when the Lost producers did it in 2010, and it worked just as well a decade later.
Of course, in 2020, it required people behind the scenes doing the work to produce the "drops." Now any LLM can be convinced with a bit of effort to participate in a "role-playing game" of this type with its user, and since Qanon itself was heavily covered and its subject matter broadly archived, even the actual structure is available as a reference.
I think it would probably be pretty easy to get an arbitrary model to start spitting out stuff like this, especially if you conditioned the initial context carefully to work around whatever after-the-fact safety measures may be in place, or just used one of the models that's been modified or finetuned to "decensor" it. There are collections of "jailbreak" prompts that go around, and I would expect Mr. Jawline Fillers here to be in social circles where that stuff would be pretty easy to come by.
From there, it doesn't seem too difficult to model how the whole thing becomes self-reinforcing, and I don't think a pre-existing organic disorder is really required. How would anyone handle a machine that specializes in telling them exactly what they want to hear, and never, ever gets tired of doing so?
Elsewhere in this thread, I proposed a somewhat sanguinary mental model for LLMs. Here's another that's much less gory, and one I think people are probably a lot more intuitively familiar with: https://harrypotter.fandom.com/wiki/Mirror_of_Erised
[1] https://en.wikipedia.org/wiki/QAnon#Origin_and_spread
I love the analogy of the Mirror of Erised. Obviously not quite the same thing, but similar tendencies, and with similar dangers. Very fitting!
Is "futurism.com" a trustworthy publication? I've never heard of it. I read the article and it didn't seem like the writing had the hallmarks of top-tier journalism.
I'm not familiar with the publication either, but the claims I've examined, most notably those concerning the subject's presently public X.com The Everything App account, appear to check out, as does the fact that the account appears to have been inactive since the day before the linked article was published last week. It isn't clear to me where the reputation of the source becomes relevant.