Comment by djoldman
3 days ago
As a consequence of my profession, I understand how LLMs work under the hood.
I also know that we data and tech folks will probably never win the battle over anthropomorphization.
The average user of AI, never mind folks who should know better, is so easily convinced that AI "knows," "thinks," "lies," "wants," "understands," etc. Add to this that all AI hosts push this perspective (and why not? it's the easiest white lie to get users engaged and getting a lot of value out of the product), and there's really too much to fight against.
We're just gonna keep running into this, and it'll be like when you take chemistry and physics and the teachers say, "it's not actually like this, but we'll get to how it really works some years down the line; just pretend this is true for the time being."
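The "under the hood" view the comment refers to is, mechanically, repeated next-token sampling from a probability distribution. A minimal sketch of one decoding step (the vocabulary and logit values here are invented purely for illustration; real models operate over tens of thousands of tokens):

```python
import math

# An LLM emits scores (logits) over a vocabulary; softmax turns them into
# probabilities, and the next token is chosen from that distribution.
vocab = ["knows", "thinks", "predicts", "lies"]
logits = [1.2, 0.8, 2.5, 0.1]  # hypothetical model outputs for one step

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: pick the highest-probability token. (Real systems often
# sample with a temperature instead of always taking the argmax.)
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "predicts" has the largest logit, so greedy decoding picks it
```

Whether chaining billions of such steps does or does not amount to "thinking" is, of course, exactly what the rest of the thread argues about.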
These discussions often end up resembling religious arguments. "We don't know how any of this works, but we can fathom an intelligent god doing it, therefore an intelligent god did it."
"We don't really know how human consciousness works, but the LLM resembles things we associate with thought, therefore it is thought."
I think most people would agree that the functioning of an LLM resembles human thought, but most, even those who think LLMs can think, would agree that LLMs don't think in exactly the same way a human brain does. At best, you can argue that whatever they are doing could be classified as "thought," because we barely have a good definition for the word in the first place.
I don't think I've heard anyone (beyond the most inane Twitterati) confidently state "therefore it is thought."
I hear a lot of people saying "it's certainly not and cannot be thought" and then "it's not exactly clear how to delineate these things or how to detect any delineations we might want."
You may know the mechanics, but you don't know how LLMs "work," because no one really understands that yet (hopefully someone will).
I'm a neurologist, and as a consequence of my profession, I understand how humans work under the hood.
The average human is so easily convinced that humans "know", "think", "lie", "want", "understand", etc.
But really it's all just a probabilistic chain reaction of electrochemical and thermal interactions. There is literally nowhere in the brain's internals for anything like "knowing" or "thinking" or "lying" to happen!
Strange that we have to pretend otherwise.
>I'm a neurologist, and as a consequence of my profession, I understand how humans work under the hood.
There you go again, auto-morphizing the meat-bags. Vroom vroom.
I upvoted you.
This is a fundamentally interesting point. Taking your comment in good faith, as HN guidelines advise, I totally agree.
I think genAI freaks a lot of people out because it makes them doubt what they thought made them special.
And to your comment: humans have always used words reserved for humanity that indicate we're special: that we think, feel, etc. That we're human. Maybe we're not so special. Maybe that's scary to a lot of people.
And I upvoted you! Because that's an upstanding thing to do.
(And I was about to react with
"In 2025, ironically, a lot of anti-anthropomorphization is actually anthropocentrism with a moustache."
I'll have to save it for the next debate)
Doesn't it strike you as a bit illogical to state in your first sentence that you "understand how humans work under the hood" and then go on to say that humans don't actually "understand" anything? Clearly everything is, at its basis, a chemical reaction, but the right reactions chained together create understanding, knowing, etc. I do believe that the human brain can be modeled by machines, but I don't believe LLMs are anywhere close to being on the right track.
>everything at its basis is a chemical reaction, but the right reactions chained together create understanding, knowing, etc
That was their point. Or rather, that the analogous argument about the underpinnings of LLMs is similarly unconvincing regarding the issue of thought or understanding.
There are no properties of matter or energy that can have a sense of self or experience qualia. Yet we all do. Denying the hard problem of consciousness just slows down our progress in discovering what it is.
We need to find that difference before we can discover what it is. And in the meantime, how can we know that LLMs don't have it?
(Hint: I am not denying the hard problem of consciousness ;) )