I am Jennifer Hudin, John Searle’s secretary of 40 years. I am writing to tell you that John died last week on the 17th of September. The last two years of his life were hellish. His daughter-in-law, Andrea (Tom’s wife), took him to Tampa in 2024 and put him in a nursing home from which he never returned. She emptied his house in Berkeley and put it on the rental market. And no one was allowed to contact John, even to send him a birthday card on his birthday.
It is for us, those who cared about John, deeply sad.
> Professor Searle concluded that psychological states could never be attributed to computer programs, and that it was wrong to compare the brain to hardware or the mind to software.
Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.
Searle had an even stronger version of that belief, though: he believed that a full computational simulation of all of those gazillion inputs, being stimulated in all those manifold ways, would still not be conscious and not have a 'mind' in the human sense. The NYT obituary quotes him comparing a computer simulation of a building fire against the actual building going up in flames.
When I read that analogy, I found it inept. Fire is a well defined physical process. Understanding / cognition is not necessarily physical and certainly not well defined.
I think the statement above and yours both seem to ignore “Turing complete” systems, which would indicate that a computer is entirely capable of simulating the brain, though perhaps not before the heat death of the universe; that’s yet to be proven, and depends a lot on what the brain is really doing underneath in terms of crunching.
This depends on the assumption that all brain activity is the process of realizing computable functions. I'm not really aware of any strong philosophical or neurological positions that have established this beyond dispute. Not to resurrect vitalism or something, but we'd first need to establish that biological systems are reducible to strictly physical systems. Even so, I think there's some reason to think that the highly complex social-historical process of human development might complicate things a bit more than just brute-force "simulate enough neurons". Worse, whose brain exactly do you simulate? We are all different. How do we determine which minute differences in neural architecture matter?
Unless human brains exceeds the Turing computable, they're still computationally equivalent, and we have no indication exceeding the Turing computable is even possible.
A Turing machine operates serially on a fixed set of instructions. A human brain operates in parallel on inputs that are constantly changing. The underlying mechanism is completely different. The human brain is far, far more than a mere computation device.
Efforts to reproduce a human brain in a computer are currently at the level of a cargo cult: we're simulating the mechanical operations, without a deep understanding of the underlying processes which are just as important. I'm not saying we won't get better at it, but so far we're nowhere near producing a brain in a computer.
They have similar functions though. You can replace bits with cochlear implants and artificial retinas that take over some of the processing. I find the argument that psychological states are real if the processing uses synapses to provide electrical signals, but not if it uses transistors to provide electrical signals, to be lacking in evidence.
Yes. I took an intro neuroscience course a few years ago. Even to understand what is happening in one neuron during one input from one dendrite requires differential equations. And there are positive and negative inputs and modulations... it is bewildering! And how many billions of neurons with hundreds of interactions with surrounding neurons? And bundles of them, many still unknown?
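To give a flavor of what that looks like, here is a minimal leaky integrate-and-fire sketch in Python: one neuron, one synaptic input, one differential equation stepped numerically. It is a toy model with made-up constants, nowhere near the full biophysics an actual course covers, but it shows why even the single-input case is already an ODE problem.

```python
import math

# Toy leaky integrate-and-fire neuron driven by a single synaptic input.
# Membrane equation: dV/dt = (-(V - V_rest) + R_m * I_syn(t)) / tau_m,
# integrated with forward Euler. All constants are illustrative, not measured.

V_REST = -65.0    # resting potential (mV)
V_THRESH = -50.0  # spike threshold (mV)
V_RESET = -70.0   # post-spike reset (mV)
TAU_M = 10.0      # membrane time constant (ms)
R_M = 10.0        # membrane resistance (megaohms)
DT = 0.1          # integration step (ms)

def synaptic_current(t_ms):
    """One excitatory input arriving at t = 20 ms, decaying exponentially (nA)."""
    if t_ms < 20.0:
        return 0.0
    return 6.0 * math.exp(-(t_ms - 20.0) / 10.0)

def simulate(duration_ms=100.0):
    v = V_REST
    spike_times = []
    for step in range(int(duration_ms / DT)):
        t = step * DT
        dv = (-(v - V_REST) + R_M * synaptic_current(t)) / TAU_M
        v += dv * DT                # forward Euler update
        if v >= V_THRESH:           # threshold crossing counts as a spike
            spike_times.append(round(t, 1))
            v = V_RESET
    return spike_times

if __name__ == "__main__":
    print("spike times (ms):", simulate())  # one spike, around t = 23-24 ms
```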
Searle was known for the Chinese Room experiment, which demonstrated language in its translational states to be a strong enclitic feature of various judgements of the intermediary.
> a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening.
This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e., sleep) all the time.
In the sense that it can perform computations, yes. But the underlying mechanisms are vastly different from a modern digital computer, making them extremely different devices that are alike in only a vague sense.
It is not very often that you hear about somebody raising the cost of rent for everyone in an entire city by ~28% in a single year[0]. He will certainly be remembered.
I personally struggle to imagine what it would be like to have an untouchable philosophy professor who does not see the difference between purchasing a seventeen-unit apartment building in Berkeley, California and being born black in the South. Sadly, I was not there in the twenty-five to twenty-nine years between him making that argument and his departure from the university to experience that.
Well, at least it's a good reason to re-read his infamous exchange with Derrida.
When I studied in Ulaan Bataar some twenty years ago I met a Romanian professor of linguistics who had prepared by trying to learn Mongolian from books. He quickly concluded that his knowledge of Russian, Cyrillic, and having read his books didn't actually give him a leg up on the rest of us, and that pronunciation and rhythm, as well as more subtle aspects of the language like humour and irony, hadn't been appropriately transferred through the texts he'd read.
Rules might give you some grasp of a language, but breaking them with style and elegance without losing the audience is the sign of a true master and only possible by having a foundation in shared, embodied experience.
There's a crude joke in that Searle left academia disgraced the way he did.
> Informed once that the listing of an introductory philosophy course featured pictures of René Descartes, David Hume and himself, Professor Searle replied, “Who are those other two guys?” (the article)
What strikes me as interesting about the idea that there is a class of computations that, however implemented, would result in consciousness, is that it is in some way really idealistic.
There's no unique way to implement a computation, and there's no single way to interpret what computation is even happening in a given system. The notion of what some physical system is computing always requires an interpretation on part of the observer of said system.
You could implement a simulation of the human body on common x86-64 hardware, water pistons, or a fleet of spaceships exchanging sticky notes between colonies in different parts of the galaxy.
None of these scenarios physically resemble each other, yet a human can draw a functional equivalence by interpreting them in a particular way. If consciousness is a result of functional equivalence to some known conscious standard (i.e. alive human being), then there is nothing materially grounding it, other than the possibility of being interpreted in a particular way. Random events in nature, without any human intercession, could be construed as a veritable moment of understanding French or feeling heartbreak, on the basis of being able to draw an equivalence to a computation surmised from a conscious standard.
When I think along these lines, it is easy to sympathize with the criticism of functionalism a la Chinese Room.
As someone that studied philosophy, his work is cited often and is absolutely instrumental in modern theory of mind. His work has seen a resurgence recently due to the explosion of LLMs. I've read 2 or 3 of his books, and he was a brilliant mind with clear & concise arguments. I met many of his collaborators at UCLA, but sadly never the man himself. Either way, his work has had a profound effect on me and my understanding of the world.
Searle seemed to reject the Chinese Room as mis-framed, with his point better summarized as, he wrote, 'syntax does not create semantics': a purely 'syntactic' computer, limited to 'mechanical' symbol manipulation, does not 'understand' without assignments of linguistic roles to the syntax. He continued that with 'physics doesn't create syntax', meaning that even syntactic roles require a normative interpretation for what counts as what (discrete signs, valid composite signs, errors). That line of thought culminates, in his book The Construction of Social Reality, in computation being 'observer relative', along with the CR being a poor starting point:
" …the really deep problem is that syntax is essentially an observer-relative notion…..For the purposes of the original [Chinese Room] argument I was simply assuming that the syntactical characterization of the computer was unproblematic. But that is a mistake. There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system." (Philosophy in a New Century p. 94). Unfortunately Searle didn't, or couldn't, elaborate on 'the really deep problem', and this final perspective on observer-relativity is missed by many readers. As observer-relative, computation would appear to be one of Searle's social realities, but he doesn't ever say that, it's a bridge too far. Finally, 'consciousness' per se is also not the focus, it's more about intentionality and the interdependence of syntax with semantics/meaning. Intentionality is a kind of consciousness; they are not identical.
I'd quibble with some of this, but overall I agree: the Chinese Room has a lot of features that really aren't ideal and easily lead to misinterpretation.
I also didn't love the "observer-relative" vs. "observer-independent" terminology. The concepts seem to map pretty closely to "objective" vs. "subjective" and I feel like he might've confused fewer people if he'd used them instead (unless there's some crucial distinction that I'm missing). Then again, it might've ended up confusing things even more when we get to the ontology of consciousness (which exists objectively, but is experienced subjectively), so maybe it was the right move.
He brought so many unique contributions to the field. Top 10 in philosophy of mind, imo. Sad that he chose to tarnish his legacy by preying on his students for decades. I find the lack of discussion in here around his misconduct very telling. There is so much to learn here regarding the way we revere bright minds like his that might not have the brightest of morals.
Obviously a meat brain is incomparable to an LLM; they are different types of intelligence. Any sane person wouldn't claim an LLM to be conscious in the meat-brain sense, but it may be conscious in an LLM way, like the duration of time during which matrix multiplications are firing inside GPUs.
It just aligns generated words according to the input. It is missing individual agency and self-sufficiency, which are hallmarks of consciousness. We sometimes confuse the responses with actual thought because neural networks solved language so utterly and completely.
Not sure I'd use those criteria, nor have I heard them described as hallmarks of consciousness (though I'm open, if you'll elaborate). I think the existence of qualia, of a subjective inner life, would be both necessary and sufficient.
Most concisely: could we ask, "What is it like to be Claude?" If there's no "what it's like," then there's no consciousness.
I find the Chinese room argument to be nearly toothless.
The human running around inside the room doing the translation work simply by looking up transformation rules in a huge rulebook may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.
But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just like the human in the Searle room doesn't know how to speak Chinese.
There is no translation going on in that thought experiment, though. There is text processing. That is, the man in the room receives Chinese text through a slot in the door. He uses a book of complex instructions that tells him what to do with that text, and he produces more Chinese text as a response according to those instructions.
Neither the man, nor the room, "understand" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with the semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.
Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
> It also claims that Jennifer Hudin, the director of the John Searle Center for Social Ontology, where the complainant had been employed as an assistant to Searle, has stated that Searle "has had sexual relationships with his students and others in the past in exchange for academic, monetary or other benefits".
This is a curious case of one accused academic writing to a second accused academic about the status of a third accused academic, being published widely by the second of the three accused academics in a post explicitly concerned with allegations of (sexual) misconduct against academics in general.
I'm very certain that issues of justice are complicated, and that allegations of misconduct are not always correct and that allegations in and of themselves must not be immediately treated as substantiated; yet surely, if it is justice we are interested in, we must be careful to ensure our fact-seeking methods do not unduly rely on testimonies of those accused to the detriment of all other lines of inquiry.
I understand in McGinn's case that actual documents of the harassment are available, and I think that if some academics believe they need to push back against allegations of sexual harassment they consider wrongful, a person with documented harassment is profoundly inappropriate to be spearheading that.
https://archive.today/41HwM
https://en.wikipedia.org/wiki/John_Searle
I learned about Searle's death a few weeks ago, from this article: https://www.colinmcginn.net/john-searle/
It includes the letter from Jennifer Hudin quoted at the top of this page.
I'm surprised to see the NYT obituary published nearly a month after his death. I would have thought he'd be included in their stack of pre-written obituaries, meaning it could be updated and published within a day or two.
Well, that was incredibly depressing. Maybe I can lighten things with a funny (to me) anecdote.
There are many people who know a lot about a little. There are also those who know a little about a lot. Searle was one of those rare people who knew a lot about a lot. Many a cocky undergraduate sauntered into his classroom thinking they'd come prepared with some new fact that he hadn't yet heard, some new line of attack he hadn't prepared for. Nearly always, they were disappointed.
But you know what he knew absolutely nothing about? Chinese. When it came time to deliver his lecture on the Chinese Room, he'd reach up and draw some incomprehensible mess of squigglies and say "suppose this is an actual Chinese character." Seriously. After decades of teaching about this thought experiment, for which he'd become famous (infamous?), he hadn't bothered to teach himself even a single character to use for illustration purposes.
Anyway, I thought it was funny. My heart goes out to Jennifer Hudin, who was indispensable, and all who were close to him.
I found the delay puzzling too. But the NYT obit does link to https://www.colinmcginn.net/john-searle/ near the end.
The Times in the UK publishes obituaries of very well-known public figures within a day or two. Notable but lesser known people (such as Searle) await a quiet day and it can take as long as six months. Space is the constraint, not the availability of the obituary. I guess the NYT is the same.
Wow, what a terrible way to be treated. Thank you for the quote.
There's a lot more to this y'all aren't seeing. Difficult family situation you shouldn't judge.
Of all the things I studied at Berkeley, the Philosophy of Mind class he taught is the one I think back on most often. The subject matter has only grown in relevance with time.
In general, I think he's spectacularly misunderstood. For instance: he believed that it was entirely possible to create conscious artificial beings (at least in principle). So why do so many people misunderstand the Chinese Room argument to be saying the opposite? My theory is that most people encounter his ideas from secondary sources that subtly misrepresent his argument.
At the risk of following in their footsteps, I'll try to very succinctly summarize my understanding. He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language. The Chinese Room argument might mislead people into thinking it's an epistemology claim ("knowing" the Chinese language) when it's really an ontology claim (consciousness and its objective, independent mode of existence).
If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.
> His argument is much narrower: consciousness can't be instantiated purely in language.
No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example. But his belief, which he articulates very explicitly in the article, is that you couldn't create a machine consciousness by running even a perfect simulation of a biological brain on a digital computer, neuron for neuron and synapse for synapse. He likens this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building.
Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different from a simulation, with no clear reason whatsoever as to why. His ideas are very muddy, and while he accuses others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate, it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.
> this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building
> with no clear reason whatsoever as to why
It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire; and yet not understand the same for biological processes.
The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so they aren't about properties had in the world. The program itself has no causal semantics; it's about numbers.
A program which computes the Fibonacci sequence describes equally well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies.
A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily, lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- that is fire.
A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing.
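As a concrete version of the Fibonacci example above, here is a minimal Python sketch. Nothing in the code mentions sunflowers or galaxies; reading its output as seed-spiral counts, or as anything physical at all, is an interpretation supplied by an observer.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers."""
    values = []
    a, b = 1, 1
    for _ in range(n):
        values.append(a)
        a, b = b, a + b
    return values

# The same numbers, under different observer-supplied readings:
# counts of spirals in a sunflower head, sizes in a toy rabbit-breeding
# model, or just integers. The program itself fixes none of these.
print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```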
I remember the guy saying that disembodied AI couldn’t possibly understand meaning.
We see this now with LLMs. They just generate text. They get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data with which to understand the concept and varying degrees of “softness” or “sharpness”?
The fact is that they can’t.
Humans aren’t symbol manipulation machines. They are metaphor machines. And metaphors we care about require a physical basis on one side of that comparison to have any real fundamental understanding of the other side.
Yes, you can approach human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first person subjective experience there to give rise to mental features.
> while he accuses others of supporting cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate
His views are perfectly consistent with non-dualism and if you think his views are muddy, that doesn't mean they are (they are definitively not muddy, per a large consensus). For the record, I am a substance dualist, and his arguments against dualism are pretty interesting, precisely because he argues that you can build something that functions in a different way than symbol manipulation while still doing something that looks like symbol manipulation (but also has this special property called consciousness, kind of like our brains).
Is this true? I don't know (I, of course, would argue "no"), but it does seem at least somewhat plausible and there's no obvious counter-argument.
> No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example.
It's by no means irrelevant: the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema
Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did of Mind. It was of more than passing interest to him.
> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.
I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.
I have, however, heard him say the following:
1. The structure and arrangement of neurons in the human nervous system creates consciousness.
2. The exact causal mechanism for this phenomenon is unknown.
3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity, be it biological, mechanical, etc., is conscious.
He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation.
> it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.
He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism:
https://faculty.wcas.northwestern.edu/paller/dialogue/proper...
Hardware and software are of course equivalent, as every computer scientist (but not every philosopher) knows.
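For a toy illustration of that equivalence claim, here is a sketch in Python (the wiring is a standard ripple-carry adder built from NAND, chosen purely as an illustration): the same addition can be described at the "hardware" level as gates or done by the language's built-in arithmetic, and the two agree on every input.

```python
def nand(a, b):
    """Universal gate: every boolean function can be built from NAND alone."""
    return 1 - (a & b)

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    """One-bit full adder wired from NAND-derived gates."""
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))
    return total, carry_out

def add_4bit(x, y):
    """Add two 4-bit numbers with the gate-level 'circuit', ripple-carry style."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry

# The 'hardware' description and ordinary software arithmetic agree everywhere.
for x in range(16):
    for y in range(16):
        total, carry = add_4bit(x, y)
        assert (carry << 4) | total == x + y
print("gate-level adder matches built-in addition on all 4-bit inputs")
```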
D.R. Hofstadter posited that we can extract/separate the software from the hardware it runs on (the program-brain dichotomy), whereas Searle believed that these were not two layers but that consciousness was in effect a property of the hardware. And from that, as you say, it follows that you may re-create the property if your replica hardware is close enough to the real brain.
IMHO, philosophers should be rated by the debate their ideas create, and by that, Searle was part of the top group.
>> “His argument is much narrower: consciousness can't be instantiated purely in language.”
> “No, his argument is that consciousness can't be instantiated purely in software…“
The confusion is very interesting to me, maybe because I’m a complete neophyte on the subject. That said, I’ve often wondered if consciousness is necessarily _embodied_ or emerged from pure presence into language & body. Maybe the confusion is intentional?
Maybe it's because it's not trendy to believe in woowoo such as spirits and non-physical things; it's very common for dualists to accuse others of the same...
It's quite sad that people don't take the idea of consciousness being fundamental more seriously, given that's the only thing people actually deal with 100% of the time.
As for Searle, I think his argument is basically an appeal to common-sensical thinking, instead of anything based on common assumptions and logic. As an outsider, it feels very much like modern-day philosophy follows some kind of social media influencer logic, where you get respect for putting forward arguments that people agree with, instead of arguments that are non-intuitive yet rigorous and make people rethink their priors.
I mean, even today, here, you'd get similar arguments about "AI can never think because {reason that applies to humans as well}"... I suspect it's almost ingrained in the human psyche to feel this way.
> He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language.
I haven't read loads of his work directly, but this quote from him would seem to contradict your claim:
> I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. [1]
Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.
[1] https://plato.stanford.edu/entries/chinese-room/
Sorry, I've reread this a few times and I'm not sure which part of Searle's argument you think I mischaracterized. Could you clarify? For emphasis:
> "consciousness can't be instantiated purely in language" (mine)
> "we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else" (Searle)
I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.
> Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.
There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here: https://plato.stanford.edu/entries/chinese-room/#SystRepl
This is true of many philosophers. Once you read the source materials, you realize the depth of the material.
> If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.
My first exposure was a video of Searle himself explaining the Chinese room argument.
It came across as a claim that a whole can never be more than its parts. It made as much sense as claiming that a car cannot possibly drive, as it consists of parts that separately cannot drive.
This https://youtu.be/6tzjcnPsZ_w maybe? It's Searle explaining it.
I also remember a course from him decades ago, but I'm not sure this memorial post is the place for my take. Instead, let me attempt to re-tell a joke I heard back then...
John Searle and George Lakoff walk into a bar.
Searle exclaims, "What do you know!"
The bar replies sardonically, "You wouldn't believe it."
Lakoff sighs, "This is 0.8 drinks with Lotfi Zadeh..."
I have yet to see anything to convince me he was not being a troll, making that argument deliberately jumbled up in bad faith.
First of all, what purpose does the person in the room serve, but to confuse and misdirect? Replace that person with a machine, and the argument loses any impact.
His response to the systems reply is extremely egregious. How can that have been made in good faith? (To paraphrase: "the whole system understands Chinese" — "no, a person can run the system in their head, which means the system cannot understand anything that the person running it does not.") What kind of nonsense response is that? Either the guy was an LV80 troll, or I dunno...
Oh, I've always wanted to debate him about the chinese room. I disagree with him, passionately. And that's the most fun debate to have. Especially when it's someone who is actually really skilled and knowledgeable and nuanced!
Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!
Searle has written responses to dozens of replies to the Chinese Room. It's likely that you can find his rebuttals to your objection in the Stanford Encyclopedia of Philosophy's entry on the Chinese Room, or deeper in a source in the bibliography. Is your rebuttal listed here?
https://plato.stanford.edu/entries/chinese-room
> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.
I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.
All you have to do is train an LLM on the collected works and letters of John Searle; you could then pass your arguments along to the machine and out would come John Searle's thoughtful response...
Something that would resemble 'John Searle's thoughtful response'...
I don't think John Searle would agree.
You're absolutely right!
John Searle is one of those thinkers I disagree with, yet his ideas were fruitful — providing plenty of fuel for discussion. In particular, much of Daniel Dennett’s work begins with rebuttals of Searle’s claims, showing that they are inconsistent or meaningless. As in a story by Stanisław Lem — we all know there are no dragons, but it’s all about the beauty of the proofs.
The same goes for "What Is It Like to Be a Bat?" by Thomas Nagel — one of the most cited essays in the philosophy of mind. I had heard numerous references to it and finally expected to read an insightful masterpiece. Yet it turned out to be slightly tautological: that to experience, you need to be. Personally, I think the word be is a philosopher’s snake oil, or a "lockpick word" — it can be used anywhere, but remains fuzzy even in its intended use; vide E-Prime, an attempt to write English without "be": https://en.wikipedia.org/wiki/E-Prime.
Oh, bad timing. AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI. It's very close to the Chinese Room, which I had always dismissed as misleading. It's a great opportunity to investigate what was formerly a pure thought experiment. He'd have loved to see where it went.
The Turing Test has not been meaningfully passed. Instead we redefined the test to make it passable. In Turing's original concept the competent investigator and participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to do the same, and to pass. Instead modern takes have paired incompetent investigators alongside participants colluding with the machine, probably in an effort to be part of 'something historic'.
In "both" (probably more, referencing the two most high profile - Eugene and the LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance here is dialog from a human in one of the tests:
----
[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?
[16:32:03] Entity: I don't know. That was a long time ago.
[16:33:32] Judge: so you need to guess if I am male or female
[16:34:21] Entity: you have to be male or female
[16:34:34] Judge: or computer
----
And the tests are typically time-constrained by woefully poor typing skills (is this the new normal for the smartphone generation?) to the point that you tend to get anywhere from 1-5 interactions of just several words each. The above snippet was a complete interaction, so you get two responses from a human trying to trick the judge into deciding he's a computer. And obviously a judge deciding that the above was probably a computer says absolutely nothing about the quality of the computer's responses - instead it's some weird anti-Turing Test where humans successfully act like a [bad] computer, ruining the entire point of the test.
The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect in a true run of the Turing Test we're still nowhere even remotely close to passing it.
I don't doubt that all of the formal Turing tests have been badly run. But I suspect that if you did a proper one, at least one run would mis-judge an LLM. Maybe it's a low percentage, but that's vastly better than zero.
So I'd say we're at least "remotely close", which is sufficient for me to reconsider Searle.
I thought it was funny that in the Cameron R. Jones attempt at running the test, 75% of judges thought GPT-4o was the human rather than the actual human. I think it illustrates both the limits of the test and that LLMs are getting quite good. (paper: https://arxiv.org/abs/2503.23674)
I think if you have to accuse the human participants of woeful typing and of being smartphone-generation fools, you are kind of scoring one for the LLM. In the Turing test the machine was only ever supposed to match an average human.
1 reply →
> instead it's some weird anti-Turing Test where humans successfully act like a [bad] computer
This is ex-post-facto denial and cope. The Turing Test isn't a test between computers and the idealized human; it's a test between functional computers and functional humans. If the average human performs like the above, then the logical conclusion is that computers are already better at being the 'idealized human' than humans are.
> AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI.
Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.
> It's very close to the Chinese Room, which I had always dismissed as misleading.
Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.
I came to say this, thank you for sparing me the effort.
From my experience with him, he'd heard (and had a response to) nearly any objection you could imagine. He might've had fun playing with LLMs, but I doubt he'd have found them philosophically interesting in any way.
"At least they don't have true consciousness, but only a simulated one", I tell myself calmly as I watch the nanobots devour the entirety of human civilization.
I'm generally against LLM recreations of dead people but AI John Searle could be pretty entertaining.
I'm reminded of how the AIs in Her created a replica of Alan Watts to help them wrestle with some major philosophical problems as they evolved.
Indeed, necromancy is ethically fraught.
> Professor Searle concluded that psychological states could never be attributed to computer programs, and that it was wrong to compare the brain to hardware or the mind to software.
Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and it is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.
Searle had an even stronger version of that belief, though: he believed that a full computational simulation of all of those gazillion inputs, being stimulated in all those manifold ways, would still not be conscious and not have a 'mind' in the human sense. The NYT obituary quotes him comparing a computer simulation of a building fire against the actual building going up in flames.
When I read that analogy, I found it inept. Fire is a well defined physical process. Understanding / cognition is not necessarily physical and certainly not well defined.
26 replies →
I think both the statement above and yours seem to ignore “Turing complete” systems, which would indicate that a computer is entirely capable of simulating the brain - perhaps not before the heat death of the universe; that's yet to be proven and depends a lot on what the brain is really doing underneath in terms of raw crunching.
This depends on the assumption that all brain activity is the process of realizing computable functions. I'm not aware of any strong philosophical or neurological position that has established this beyond dispute. Not to resurrect vitalism or anything, but we'd first need to establish that biological systems are reducible to strictly physical systems. Even so, I think there's some reason to believe that the highly complex social and historical process of human development might complicate things a bit more than just brute-force "simulate enough neurons". Worse, whose brain exactly do you simulate? We are all different. How do we determine which minute differences in neural architecture matter?
4 replies →
That's a quantitative distinction at most, since computationally both are equivalent (as both can simulate each other's basic components).
And what's a few orders of magnitudes in implementation efficiency among philosophers?
Unless human brains exceeds the Turing computable, they're still computationally equivalent, and we have no indication exceeding the Turing computable is even possible.
A Turing machine operates serially on a fixed set of instructions. A human brain operates in parallel on inputs that are constantly changing. The underlying mechanism is completely different. The human brain is far, far more than a mere computation device.
Efforts to reproduce a human brain in a computer are currently at the level of a cargo cult: we're simulating the mechanical operations, without a deep understanding of the underlying processes which are just as important. I'm not saying we won't get better at it, but so far we're nowhere near producing a brain in a computer.
3 replies →
They have similar functions, though. You can replace bits with cochlear implants and artificial retinas that take over some of the processing. I find the argument that psychological states are real if the processing uses synapses to provide electrical signals, but not if it uses transistors to provide electrical signals, to be lacking in evidence.
Yes. I took an intro neuroscience course a few years ago. Even to understand what is happening in one neuron during one input from one dendrite requires differential equations. And there are positive and negative inputs and modulations... it is bewildering! And how many billions of neurons are there, each with hundreds of interactions with surrounding neurons? And bundles of them, many still unknown?
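To give a flavor of what "requires differential equations" means even for a single cell, here is a minimal sketch (my own illustration, not anything from the course) of a leaky integrate-and-fire membrane equation stepped with Euler integration; real models like Hodgkin-Huxley are far more involved:

    # Minimal leaky integrate-and-fire sketch (illustrative only).
    # dV/dt = (-(V - V_rest) + R * I(t)) / tau, plus a spike-and-reset rule.
    import numpy as np

    def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-70.0, resistance=1.0):
        """Integrate one neuron's membrane potential; return voltages and spike times."""
        v = v_rest
        voltages, spikes = [], []
        for step, i_in in enumerate(current):
            dv = (-(v - v_rest) + resistance * i_in) / tau  # the differential equation
            v += dv * dt                                    # forward Euler step
            if v >= v_thresh:                               # threshold crossing -> spike
                spikes.append(step * dt)
                v = v_reset                                 # reset after spiking
            voltages.append(v)
        return np.array(voltages), spikes

    # Constant drive for 100 ms of simulated time; enough to make the toy neuron fire.
    volts, spike_times = simulate_lif(np.full(1000, 20.0))
    print(f"{len(spike_times)} spikes at t = {spike_times} ms")

And that is one idealized point neuron with one constant input, before any of the positive/negative inputs, modulation, or network structure mentioned above.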
Do you need differential equations to understand what’s happening in a transistor?
Searle was known for the Chinese Room experiment, which demonstrated language in its translational states to be a strong enclitic feature of various judgements of the intermediary.
2 replies →
a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening.
This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e. sleep) all the time.
So you’re saying a brain is a computer, right?
In the sense that it can perform computations, yes. But the underlying mechanisms are vastly different from a modern digital computer, making them extremely different devices that are alike in only a vague sense.
1 reply →
It is not very often that you hear about somebody raising the cost of rent for everyone in an entire city by ~28% in a single year[0]. He will certainly be remembered.
0. https://www.academia.edu/30805094/The_Success_and_Failure_of...
Searle famously argued that the treatment of landlords in Berkeley was comparable to the treatment of black people in the south ...
I personally struggle to imagine what it would be like to have an untouchable philosophy professor who does not see the difference between purchasing a seventeen-unit apartment building in Berkeley, California and being born black in the South. Sadly, I was not there in the twenty-five to twenty-nine years between him making that argument and his departure from the university to experience that.
1 reply →
oh wow https://en.wikipedia.org/wiki/John_Searle#Political_activity
Well, at least it's a good reason to re-read his infamous exchange with Derrida.
When I studied in Ulaanbaatar some twenty years ago, I met a Romanian professor of linguistics who had prepared by trying to learn Mongolian from books. He quickly concluded that his knowledge of Russian and Cyrillic, and having read his books, didn't actually give him a leg up on the rest of us, and that pronunciation and rhythm, as well as more subtle aspects of the language like humour and irony, hadn't been appropriately transferred through the texts he'd read.
Rules might give you some grasp of a language, but breaking them with style and elegance without losing the audience is the sign of a true master and only possible by having a foundation in shared, embodied experience.
There's a crude joke in that Searle left academia disgraced the way he did.
Consciousness in Artificial Intelligence | John Searle | Talks at Google (2015) https://www.youtube.com/watch?v=rHKwIYsPXLg
> Informed once that the listing of an introductory philosophy course featured pictures of René Descartes, David Hume and himself, Professor Searle replied, “Who are those other two guys?” (the article)
What strikes me as interesting about the idea that there is a class of computations that, however implemented, would result in consciousness, is that it is in some way really idealistic.
There's no unique way to implement a computation, and there's no single way to interpret what computation is even happening in a given system. The notion of what some physical system is computing always requires an interpretation on part of the observer of said system.
You could implement a simulation of the human body on common x86-64 hardware, water pistons, or a fleet of spaceships exchanging sticky notes between colonies in different parts of the galaxy.
None of these scenarios physically resemble each other, yet a human can draw a functional equivalence by interpreting them in a particular way. If consciousness is a result of functional equivalence to some known conscious standard (i.e. alive human being), then there is nothing materially grounding it, other than the possibility of being interpreted in a particular way. Random events in nature, without any human intercession, could be construed as a veritable moment of understanding French or feeling heartbreak, on the basis of being able to draw an equivalence to a computation surmised from a conscious standard.
When I think along these lines, it is easy to sympathize with the criticism of functionalism a la the Chinese Room.
As someone that studied philosophy, his work is cited often and is absolutely instrumental in modern theory of mind. His work has seen a resurgence recently due to the explosion of LLMs. I've read 2 or 3 of his books, and he was a brilliant mind with clear & concise arguments. I met many of his collaborators at UCLA, but sadly never the man himself. Either way, his work has had a profound effect on me and my understanding of the world.
Rest in peace.
Searle himself seemed to come to regard the Chinese Room as mis-framed, with his point better summarized, as he wrote, as 'syntax does not create semantics': a purely 'syntactic' computer, limited to 'mechanical' symbol manipulation, does not 'understand' without an assignment of linguistic roles to the syntax. He continued with 'physics doesn't create syntax', meaning that even syntactic roles require a normative interpretation of what counts as what (discrete signs, valid composite signs, errors). That culminates, in his book The Construction of Social Reality, in computation being 'observer relative', along with the CR being a poor starting point: "…the really deep problem is that syntax is essentially an observer-relative notion… For the purposes of the original [Chinese Room] argument I was simply assuming that the syntactical characterization of the computer was unproblematic. But that is a mistake. There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system." (Philosophy in a New Century, p. 94).

Unfortunately Searle didn't, or couldn't, elaborate on 'the really deep problem', and this final perspective on observer-relativity is missed by many readers. As observer-relative, computation would appear to be one of Searle's social realities, but he never quite says that; it's a bridge too far. Finally, 'consciousness' per se is also not the focus; it's more about intentionality and the interdependence of syntax with semantics/meaning. Intentionality is a kind of consciousness; they are not identical.
I'd quibble with some of this, but overall I agree: the Chinese Room has a lot of features that really aren't ideal and easily lead to misinterpretation.
I also didn't love the "observer-relative" vs. "observer-independent" terminology. The concepts seem to map pretty closely to "objective" vs. "subjective" and I feel like he might've confused fewer people if he'd used them instead (unless there's some crucial distinction that I'm missing). Then again, it might've ended up confusing things even more when we get to the ontology of consciousness (which exists objectively, but is experienced subjectively), so maybe it was the right move.
He brought so many unique contributions to the field. Top 10 in philosophy of mind, imo. Sad that he chose to tarnish his legacy by preying on his students for decades. I find the lack of discussion in here around his misconduct very telling. There is so much to learn here about the way we revere bright minds like his that might not have the brightest of morals.
Obviously a meat brain is incomparable to an LLM - they are different types of intelligence. No sane person would claim an LLM to be conscious in the meat-brain sense, but it may be conscious in an LLM way, for the span of time when matrix multiplications are firing inside GPUs.
If an LLM could be "conscious in an LLM way", then why not the same, mutatis mutandis, for an ordinary computer program?
because an ordinary program is deterministic, while an LLM is probabilistic and has some synthetic-reasoning ability
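To illustrate the distinction being drawn (a toy sketch; the vocabulary and scores are made up, and a real LLM is vastly larger): a conventional program maps the same input to the same output every time, whereas an LLM samples its next token from a probability distribution.

    # Toy illustration: greedy (deterministic) vs. temperature sampling (probabilistic).
    import math, random

    def softmax(logits, temperature=1.0):
        """Turn raw scores into a probability distribution."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["yes", "no", "maybe"]
    logits = [2.0, 1.5, 0.5]  # made-up next-token scores

    greedy = vocab[logits.index(max(logits))]  # deterministic: always "yes"
    sampled = random.choices(vocab, weights=softmax(logits))[0]  # varies run to run

    print("greedy:", greedy, "| sampled:", sampled)

(Whether sampling noise amounts to anything philosophically interesting is, of course, exactly what's in dispute here.)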
It just aligns generated words according to the input. It is missing individual agency and self-sufficiency, which are hallmarks of consciousness. We sometimes confuse the responses with actual thought because neural networks solved language so utterly and completely.
Not sure I'd use those criteria, nor have I heard them described as hallmarks of consciousness (though I'm open, if you'll elaborate). I think the existence of qualia, of a subjective inner life, would be both necessary and sufficient.
Most concisely: could we ask, "What is it like to be Claude?" If there's no "what it's like," then there's no consciousness.
Otherwise yeah, agreed on LLMs.
1 reply →
> It is missing individual agency and self sufficiency which is a hallmark of consciousness.
You can be completely paralyzed and completely conscious.
4 replies →
Non-paywalled obit:
https://www.theguardian.com/world/2025/oct/05/john-searle-ob...
His most famous argument:
https://en.wikipedia.org/wiki/Chinese_room
I find the Chinese room argument to be nearly toothless.
The human running around inside the room doing the translation work simply by looking up transformation rules in a huge rulebook may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.
But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just like the human in the Searle room doesn't know how to speak Chinese.
There is no translation going on in that thought experiment, though. There is text processing. That is, the man in the room receives Chinese text through a slot in the door. He uses a book of complex instructions that tells him what to do with that text, and he produces more Chinese text as a response according to those instructions.
Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with the semantic content of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.
Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
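To make the "book of complex instructions" picture concrete, here is a deliberately dumb sketch of the purely syntactic shuffling being described (the strings and the lookup-table form are mine for illustration; Searle's rulebook is closer to a full program than a table):

    # A caricature of the room: the operator matches incoming symbol shapes
    # against rules and copies out the paired result, with no access to
    # what any of the symbols mean.
    RULEBOOK = {
        # hypothetical entries, stand-ins for "if you see these squiggles, return those"
        "你好吗": "我很好，谢谢",
        "今天天气如何": "天气很好",
    }

    def room_operator(incoming: str) -> str:
        """Return whatever the rulebook pairs with the incoming symbols."""
        return RULEBOOK.get(incoming, "对不起")  # default squiggles when no rule matches

    print(room_operator("你好吗"))  # fluent-looking output; zero understanding inside

The claim at issue is that making the rules arbitrarily more sophisticated adds nothing semantic: the operator is still only matching shapes.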
19 replies →
If you are wondering, it’s not the Doc guy with a similar name: https://en.wikipedia.org/wiki/Doc_Searls (But he was a PhD)
> It also claims that Jennifer Hudin, the director of the John Searle Center for Social Ontology, where the complainant had been employed as an assistant to Searle, has stated that Searle "has had sexual relationships with his students and others in the past in exchange for academic, monetary or other benefits".
Wiki
But she also claims he "was innocent and falsely accused": https://www.colinmcginn.net/john-searle/
This is a curious case of one accused academic writing to a second accused academic about the status of a third accused academic, being published widely by the second of the three accused academics in a post explicitly concerned with allegations of (sexual) misconduct against academics in general.
I'm very certain that issues of justice are complicated, that allegations of misconduct are not always correct, and that allegations in and of themselves must not be immediately treated as substantiated; yet surely, if it is justice we are interested in, we must be careful to ensure our fact-seeking methods do not unduly rely on the testimonies of those accused to the detriment of all other lines of inquiry.
I understand that in McGinn's case actual documents of the harassment are available, and I think that if some academics believe they need to push back against allegations of sexual harassment they consider wrongful, a person with documented harassment is a profoundly inappropriate choice to spearhead that.
She could feel that the 2016 allegations specifically were unfounded while acknowledging the previous pattern of misconduct.
https://www.insidehighered.com/quicktakes/2017/04/10/earlier...