Comment by siglesias

2 months ago

Again, it’s not an epistemological test. In reality the material difference between a computing machine and a brain is trivial. It’s showing there’s a categorical difference between the two. BTW, ethically it matters a great deal: if one system is conscious and another is not, that gives the former a moral status the latter lacks. That’s among other practical differences, such as a guarantee of function over the long term.

And again you assign a property to, or withhold it from, things that perform indistinguishably. Your definition is useless. It may as well be based on the tin of beans.

> In reality the material difference between a computing machine and a brain is trivial

No it isn’t. You are making the strong statements about how the brain works that you argued against at the start.

> Among other practical differences such as guarantee of function over long term.

Once again this ignores the setup of the argument. The solution to the Chinese Room isn’t “the trick is to wait long enough”.

I don’t know why you want to argue about this given you so clearly reject the entire concept of the thought experiment.

I find the entire thing to be intellectual wankery. A very simple and ethical solution is that if two things appear conscious from the outside, then just treat them both as such. Job done. I don’t need to find excuses like “ah, but inside there’s a book!” or “its manipulations are on the syntactic level if we just look inside” or “but it’s just valves!” I can simply not mistreat anything that appears conscious.

All of this feels like a scared response to the idea that maybe we’re not special.

  • Ok, things are getting a little heated and personal, so I'll attempt to engage one more time in good faith.

    The premise of the argument is that the Chinese Room passes the Turing Test for Chinese. There are two possibilities for how this happens: 1) the program emulates the brain, and has the right relation to the external world, more or less exactly, or 2) the program emulates the brain well enough to pass the test in some context but fails to emulate it perfectly. We know that, as it currently stands, we've "passed the Turing Test," but we do not go further and say that brains and AI perform "indistinguishably." Unless there are significant similarities between how brains work and how AIs work on some fundamental level (case 1), then even if they pass the Turing Test, it is possible that in some unanticipated scenario they will diverge significantly. Imagine a system that outputs digits of pi. You can wait until you see enough digits to be satisfied, but unless you know what's causing the output, you can never be sure that you're not witnessing the output of some rational approximation or some cached calculation that will eventually halt. What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point. Leaving that aside, assuming that you do have 1), which I believe we are still very far from, we're still left with the ethical consequences, which it seems you agree do hinge on whether the system is conscious.
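
    (To make the pi example concrete, here is a rough sketch in Python, my own illustration rather than anything from the original argument: a genuine, open-ended pi generator alongside the rational approximation 355/113. The two emit identical digits for a while and then diverge, so watching the output alone cannot tell you which kind of source you have.)

        # Rough sketch (my own illustration): two digit sources that are
        # indistinguishable from the outside for a while, then diverge.
        from itertools import islice

        def pi_digits():
            """Unbounded decimal digits of pi (Gibbons' spigot algorithm)."""
            q, r, t, k, n, m = 1, 0, 1, 1, 3, 3
            while True:
                if 4 * q + r - t < n * t:
                    yield n
                    q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
                else:
                    q, r, t, k, n, m = (q * k, (2 * q + r) * m, t * m, k + 1,
                                        (q * (7 * k + 2) + r * m) // (t * m), m + 2)

        def fraction_digits(num, den):
            """Decimal digits of num/den by long division: a 'fake pi'."""
            yield num // den
            rem = num % den
            while True:
                rem *= 10
                yield rem // den
                rem %= den

        real = list(islice(pi_digits(), 10))                # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
        fake = list(islice(fraction_digits(355, 113), 10))  # [3, 1, 4, 1, 5, 9, 2, 9, 2, 0]
        print(real[:7] == fake[:7])  # True: the first seven digits match
        print(real[7:] == fake[7:])  # False: they diverge once you keep looking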

    You made a really strong claim, which is "I can simply not mistreat anything that appears conscious", and it shows the difference in our intuitions. We are not beholden to the setup of the Chinese Room. The current scientific and rational viewpoint is, at the very least, that brains cause minds and that they cause our mental world. I'm sure you agree with that. The very point we are disputing is that it doesn't follow, just because what's going on on the outside is the same, that what goes on on the inside doesn't matter. This is particularly true if we have clear evidence that the things causing the behavior are very different: one is a physical system with biological causes and the other is a kind of simulation of the first. So when I say that a brain is trivially different from a calculating machine, what I mean is that the brain simply has different physical characteristics from a calculating machine. Maybe you disagree that those differences are relevant, but they are, you will agree, obvious. The ontology of a computer program is that it is abstract and can be implemented in any substrate. What you are saying, then, in principle, is that if I follow the steps of a program by tracking bits on a page that I'm marking manually, then somehow the right combination of bits (one that decodes to an insult) is just as morally bad as my saying those words to another human. I think many would find that implausible.

    But there are some who hold this belief. Your position is called "ethical behaviorism," and there's an essay articulating this viewpoint that I argued against. You can read it if you want! https://blog.practicalethics.ox.ac.uk/2023/03/eth%C2%ADi%C2%...

    • I have been engaging in good faith, but to be honest I am a little frustrated at having to continually point out what the actual Chinese Room thought experiment is. I think you have continually made a very important error with it.

      > What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point

      And yet entirely unrelated to this thought experiment. His point is not that the book isn't big enough, that the man inside the room will trip up at some point, or anything of the sort.

      Now you might have a different argument about all this than Searle, and that's entirely fine. I'm saying that Searle's definition of understanding is utterly pointless because he defines it not in terms of the measurable actions of a system but in terms of the way it works internally.

      > The premise of the argument is that the Chinese Room passes the Turing Test for Chinese.

      ...

      > enough to pass the test in some context but fails to emulate the brain perfectly

      No. That is a far weaker argument than Searle makes. His argument is not that it'll be hard to tell, or that it's convincing but you can still tell the difference, or that most people would be fooled.

      Let's dig into what Searle actually says.

      https://web.archive.org/web/20071210043312/http://members.ao...

      > from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.

      Already we get to the point of being indistinguishable.

      > I have inputs and outputs that are indistinguishable from those of the native Chinese speaker,

      Again indistinguishable.

      And then he doubles down on this, to the point where even fully emulating the brain is not enough:

      > imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.

      Searle has a problem - he looks at two different systems and says there is understanding in one and not in the other. Then he ties himself in knots trying to distinguish between the two.

      > The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.

      He cannot accept any sort of combination at all; he can't accept any concept of understanding being anything but binary. He cannot accept that it is perhaps not a useful term at all.

      > in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed

      A programmed system *cannot* understand. It doesn't matter how it operates or how well; note, again, that it duplicates the capabilities of a real person.

      As far as I can tell, since he leans heavily into the physical aspect, if we had two machines:

      1. Inputs are received via whatever sensors, go through a physical set of components, and drive motors/actuators

      2. Inputs are received via whatever sensors, go through a chip running an exact simulation of those same components, and drive motors/actuators

      then machine 1 could understand but machine 2 could not because it has a program running rather than just being a physical thing.

      This is despite the fact that both simply follow the laws of physics; the very concept of a program is just a description of how certain physical things are arranged.

      To go back to my point, because I'm rather frustrated at yet again just pointing out what Searle explicitly says:

      Searle defines understanding in a way that makes it, to me, entirely useless. By definition it provides no predictive power, and by definition it cannot affect anything we want to do.

      I am not arguing about which of these things understands. I'm saying the term as a whole isn't very useful, and Searle's definition has been pushed by him to the point of being entirely useless, because he starts by insisting that certain things cannot understand.
