Comment by estearum
I'm a neurologist, and as a consequence of my profession, I understand how humans work under the hood.
The average human is so easily convinced that humans "know", "think", "lie", "want", "understand", etc.
But really it's all just a probabilistic chain reaction of electrochemical and thermal interactions. There is literally nowhere in the brain's internals for anything like "knowing" or "thinking" or "lying" to happen!
Strange that we have to pretend otherwise
>I'm a neurologist, and as a consequence of my profession, I understand how humans work under the hood.
There you go again, auto-morphizing the meat-bags. Vroom vroom.
I upvoted you.
This is a fundamentally interesting point. Taking your comment in good faith, as the HN guidelines would advise, I totally agree.
I think genAI freaks a lot of people out because it makes them doubt what they thought made them special.
And to your comment, humans have always used words reserved for humanity that indicate we're special: that we think, feel, etc... That we're human. Maybe we're not so special. Maybe that's scary to a lot of people.
And I upvoted you! Because that's an upstanding thing to do.
(And I was about to react with
"In 2025 , ironically, a lot of anti-anthropomorphization is actually anthropocentrism with a moustache."
I'll have to save it for the next debate)
It doesn't strike you as a bit... illogical to state in your first sentence that you "understand how humans work under the hood" and then go on to say that humans don't actually "understand" anything? Clearly everything at its basis is a chemical reaction, but the right reactions chained together create understanding, knowing, etc. I do believe that the human brain can be modeled by machines, but I don't believe LLMs are anywhere close to being on the right track.
>everything at its basis is a chemical reaction, but the right reactions chained together create understanding, knowing, etc
That was their point. Or rather, that the analogous argument about the underpinnings of LLMs is similarly unconvincing regarding the issue of thought or understanding.
Correct^ Thank you. I knew I was going out on a bit of a limb there :)
There are no properties of matter or energy that can have a sense of self or experience qualia. Yet we all do. Denying the hard problem of consciousness just slows down our progress in discovering what it is.
We'd need an observable difference to discover what it is. And how can we know that LLMs don't?
If you tediously work out the LLM math by hand, is the pen and paper conscious too?
Consciousness is not computation. You need something else.
Even if they do, it can only be transient, during the inference process. Unlike a brain, which is constantly undergoing dynamic electrochemical processes, an LLM is just an inert pile of data except when the model is being executed.
(Hint: I am not denying the hard problem of consciousness ;) )