This isn't limited to chatbots. There are clearly some developers experiencing it with coding agents. Maybe humans don't have the cognitive capacity for so much, and such rapid, information parsing, and we experience vulnerabilities (i.e. psychosis) with overconsumption.
As I understand it, a person with psychosis is someone whose over-weighted perceptions cannot be corrected by sensory input. Hence, to "bring someone to their senses".
I've seen, and thought there might be, a few programmers with a related condition (not psychosis), an "AI mania", where one thinks one is changing the world and is uniquely positioned to do so. It's not that we aren't capable of big things in our small ways, through network effects, or the way the touch of a hand can begin a wave breaking (hello, surfers!). What distinguishes this capacity for small effects having big impacts from the mania version is that the mania version bears no subtle understanding of cause and effect. A person who is adept at understanding cause and effect usually has quite a simple, beautiful and refined rule of thumb for it: "less is more". Mania, on the other hand, proliferates outward, concept upon concept upon concept, and these concepts are removed from that cause-and-effect, playful interaction with nature. As a wise old sage hacker once said, "Proof of concept, or get the fuck out."
Where mania differs from the grounded capacity that does change the world is in the aspect of responsibility: being able to respond, or be responsive, to something. Someone who changes the world with their maths equation will be able to be responsive, responsible, and to claim the work; with a manic person there's a disconnect. Actually, from certain points of view, with certain apps or mediums that claim universality and claim to describe "how the world is", it looks like there are most definitely already some titans of industry and powerful people inside the feedback loop of mania.
They should just go for a walk. Call a childhood friend. Have a cup of tea. Come to their senses (literally).
It's good to remember we're just querying a data structure.
Humans don’t exist as standalone individuals, and a lot of our experience is shaped by being in a larger graph of people. I suspect it’s under-appreciated how social our perception is: in order to tell each other about X, we need to classify reality, define X and separate it from non-X; and because we (directly or indirectly) talk to each other so much, because we generally don’t just shut off the part of ourselves that classifies reality, the shared map we have for the purposes of communication blends with (perhaps even becomes) our experience of reality.
So, to me, “to bring someone to their senses” is significantly about reinforcing a shared map through interpersonal connection—not unlike how before online forums it was much harder to maintain particularly unorthodox[0] worldviews: when exposure to a selection of people around you is non-optional, it tempers even the most extreme left-field takes, as humans (sans pathologies) are primed to mirror each other.
I’m not a psychologist, but likening chatbot psychosis to an ungrounded feedback loop rings true, except I would say human connection is the missing ground (although you could say it’s not grounded in senses or experience by proxy, per above). Arguably, one of the significant issues of chatbots is the illusion of human connection where there’s nothing but a data structure query; and I know that some people have no trouble treating the chat as just that, but somehow that doesn’t seem like a consolation: if treating that which quite successfully pretends to be a natural conversation with a human as nothing more than a data structure query comes so naturally to them, what does it say about how they see conversing with us, the actual humans around them?
[0] As in, starkly misaligned with the community—which admittedly could be for better or for worse (isolated cults come to mind).
I recently rewatched "The Lawnmower Man" [0] and was not disappointed. The vast majority of the comments I see promoting the notion of "AI" achieving AGI sound like the Jobe Smith character from the movie.
[0]: https://en.wikipedia.org/wiki/The_Lawnmower_Man_(film)
If you have racing thoughts and some magic system responds to you, and it's abstract enough (plenty of people, even on HN, do not know how LLMs work), then going for a walk is not enough...
Ten bucks says this condition ends up evolving in the same way that female hysteria did.
What if LLMs are actually equivalent to humans in sentience? Wouldn't that make everyone psychotic except those in "chatbot psychosis"?
Don’t know if it is the same for everyone. But when I experienced psychosis I definitely thought I was on a “higher plane” of thinking than others. That didn’t help me get a single idea through and of course it was all BS. So no, it definitely is not a desirable state of mind.
Besides encouraging suicide, many LLMs also feed people's delusions =3
https://www.youtube.com/watch?v=yftBiNu0ZNU
The interesting part to me is how this anti-feature could become the primary source of value for AI, if only it were easier for everyone to run and train locally from a blank slate and without the clumsy natural language interface.
Take the example of music. Most musicians probably don't want crap like Suno. What they actually want is a fountain of ideas to riff on, based on a locally trained AI where they have finer-grained control over the parameters and attention. Instead of "telling" the AI "more this and less that", would it not make more sense to surface facets of the training data in a more transparent and comprehensible way, and provide slider controls or the ability to completely eliminate certain influences? I'm aware that's probably a tall order, but it's what's necessary.
Instead of producing delusions left to random chance and uncurated training data, we should be trying to guide AI towards clarity with the user in full control. The local training by the user effectively becomes a mirror of that user's artistic vision. It would be unique and not "owned" by anyone else.
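To make the slider idea concrete, here is a minimal, hypothetical Python sketch. The influence names, the 64-dimensional centroids, and the conditioning_vector helper are all invented for illustration; they stand in for clusters learned from a musician's own local corpus, not any existing tool's API:

    # Hypothetical sketch: expose clusters of a local training set as named
    # "influences", each with a user-controlled weight in [0, 1]. The weighted
    # mix of cluster centroids becomes a conditioning vector for generation;
    # a weight of 0.0 removes that influence entirely.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these centroids were learned from the user's own corpus
    # (e.g. embeddings of their jazz, ambient, and folk recordings).
    influences = {
        "jazz_voicings": rng.normal(size=64),
        "ambient_textures": rng.normal(size=64),
        "folk_rhythms": rng.normal(size=64),
    }

    def conditioning_vector(weights):
        """Blend influence centroids by user-set slider weights."""
        active = [(w, influences[name]) for name, w in weights.items() if w > 0.0]
        if not active:
            return np.zeros(64)
        total = sum(w for w, _ in active)
        return sum(w * vec for w, vec in active) / total

    # "Slider" settings chosen by the musician, not by a prompt.
    sliders = {"jazz_voicings": 0.7, "ambient_textures": 0.3, "folk_rhythms": 0.0}
    cond = conditioning_vector(sliders)
    print(cond.shape)  # (64,) -- fed to the generator in place of a text prompt

Setting a slider to zero drops that cluster from the blend entirely, which is the "completely eliminate certain influences" control; the mix is driven by explicit, inspectable numbers rather than a natural-language prompt the model is free to reinterpret.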