Comment by halfmatthalfcat
19 hours ago
Wow - can we coin "Slopbrain" for people who are so far gone into AI eventualism that they can no longer function? I liked "cooked," but "slopped" or something works too. Good grief lol. Talk about getting lost in the sauce...
WSJ has been writing increasingly about "AI Psychosis" (here's their most recent piece [0]).
I'm increasingly seeing that this is the real threat of AI. I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new. While not as dramatic, the normalization of "AI as therapist" is equally concerning. I know tons of people who rely on LLMs to guide them through difficult family decisions, career decisions, etc., on an almost daily basis. If I'm honest, I myself have had times where I've leaned into this too much. I've also had times where AI starts telling me how clever I am, but thankfully a lifetime of low self-worth raises warning flags in my brain when I hear this stuff! For most people, there is real temptation to buy into the praise.
Seeing Karpathy claim he can't keep up was shocking. It also immediately raises the question to anyone with a clear head: "Wait, if even Karpathy cannot use these tools effectively... just what is so useful about AI?" Isn't the entire point of AI that I can merely describe my problem and have a solution in a fraction of the time?
The fact that so many true believers in AI seem forever to be just a few more tricks away from really unleashing this power starts to make it feel very much like magical thinking on a huge scale.
The real danger of AI is that we're entering into an era of mass hallucination across multiple fields and areas of human activity.
0. https://www.wsj.com/tech/ai/ai-chatbot-psychosis-link-1abf9d...
> I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new.
Cryptoboys did it first, please recognize their innovation ty
That's NOT AI psychosis, which is real, and which I've seen close-up.
AI psychosis is getting lost in the sauce and becoming too intimate with your ChatGPT instance, or believing it's something it's not.
Skepticism, or a fear of being outside the core loop, is the exact opposite, and that's what Karpathy is talking about here. If anything, this kind of post is an indicator that you're absolutely NOT in AI psychosis.
"the core loop"? What is this?
Cyberpunk was right!
I would really like to hear more about these acquaintances who think they are evolving.
WSJ is Fox News Platinum; I wouldn't overthink it.
I feel Karpathy is smart enough to deserve a less dismissive response than this.
A mix of "too clever by half" and "never meet your heroes".
Why do you feel that way?
You think we should appeal to authority rather than address the ideas on their own merits?
How is saying the author has "slopbrain" "addressing the idea on its own merits"? It's just name calling.
I call it being "oneshot" by the AI.
Twitter folks call this LLM or AI Psychosis.
We could call it "Hacker News syndrome"
Slippery slop?
"Slopbrain" is interesting because Karpathy's fallacious argumentation mirrors the glib argument of an LLM; it's cognitively recursive, one feeding the other in a self-selecting manner.
This is what I keep hearing: "You just need something more agentic," "if you had the context length, you could've fixed that," etc. Yeah, sure. I'll believe it when I see it. For me, it's parsing 3,000-page manuals for relevant data. I can do it fairly competently from experience, but I see a lot of people unfamiliar with them struggle to extract the info they need, and AIs just cannot hold all that context, in my experience.