Comment by IanCal

2 months ago

Of course it’s the point: the systems are not distinguishable by behaviour, only by what’s inside them. There are no tests to determine what’s inside; otherwise the whole thing is pointless.

This is why I made the tin of beans comparison.

The room has property X if and only if there’s a tin of beans inside. You can’t in any way tell the difference between a room that has a tin of beans in it and one that doesn’t without looking inside.

You might find that a property that has zero predictive power, makes (by definition) no difference to what either room can do, and has no use for any practical purposes (again by definition) is rather pointless. I would agree.

Searle has a definition of understanding that, to me, cannot be useful for any actual purpose. It is therefore irrelevant to me if any system has his special property just as my tin of beans property is useless.

Again, it’s not an epistemological test. In reality the material difference between a computing machine and a brain is trivial. It’s showing there’s a categorical difference between the two. BTW—ethically it matters a great deal. If one system is conscious and another is not, that gives the first moral status. Among other practical differences, such as guarantees of function over the long term.

  • And again, you assign a property (or not) to things that perform indistinguishably. Your definition is useless. It may as well be based on the tin of beans.

    > In reality the material difference between a computing machine and a brain is trivial

    No it isn’t. You are making strong statements about how the brain works, of the kind you argued against at the start.

    > Among other practical differences such as guarantee of function over long term.

    Once again, you’re ignoring the setup of the argument. The solution to the Chinese Room isn’t “the trick is to wait long enough”.

    I don’t know why you want to argue about this given you so clearly reject the entire concept of the thought experiment.

    I find the entire thing to be intellectual wankery. A very simple and ethical solution is that if two things appear conscious from the outside, then just treat them both as such. Job done. I don’t need to find excuses like “ah but inside there’s a book!” or “its manipulations are on the syntactic level if we just look inside” or “but it’s just valves!” I can simply not mistreat anything that appears conscious.

    All of this feels like a scared response to the idea that maybe we’re not special.

    • Ok, things are getting a little heated and personal so I'll attempt to engage one more time in good faith.

      The premise of the argument is that the Chinese Room passes the Turing Test for Chinese. There are two possibilities for how this happens: 1) the program emulates the brain, and has the right relation to the external world, more or less exactly, or 2) the program emulates the brain well enough to pass the test in some context but fails to emulate it perfectly. As things currently stand, we've "passed the Turing Test," but we do not go further and say that brains and AI perform "indistinguishably." Unless there are significant similarities between how brains work and how AIs work, on some fundamental level (case 1), then even if they pass the Turing Test, it is possible that in some unanticipated scenario they will diverge significantly.

      Imagine a system that outputs digits of pi. You can wait until you see enough digits to be satisfied, but unless you know what's causing the output, you can never be sure that you're not witnessing the output of some rational approximation, or some cached calculation that will eventually halt. What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point.

      Leaving that aside, and assuming that you do have 1), which I believe we are still very far from, we're still left with the ethical consequences, which it seems you agree hinge on whether the system is conscious.
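The pi example can be made concrete with a small sketch (the hard-coded digit string and the choice of the classic approximation 355/113 are illustrative assumptions, not part of the original comment): two sources whose outputs match digit-for-digit for a while can still come from very different mechanisms, and no finite prefix settles which one you're watching.

```python
from decimal import Decimal, getcontext

# A stand-in for a "true" pi generator: a hard-coded prefix of pi's digits.
PI_DIGITS = "3.14159265358979323846"

# A rational approximation that mimics pi for a while: 355/113.
getcontext().prec = 25
APPROX = str(Decimal(355) / Decimal(113))

# Count how many leading characters the two outputs share.
agree = 0
for a, b in zip(PI_DIGITS, APPROX):
    if a != b:
        break
    agree += 1

print(PI_DIGITS[:agree])  # -> 3.141592  (the shared prefix)
print(agree)              # -> 8
```

An observer who stopped after eight characters would call the two systems indistinguishable; only at the ninth character does the rational approximation betray itself.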

      You made a really strong claim, which is "I can simply not mistreat anything that appears conscious"--which shows the difference in our intuitions. We are not beholden to the setup of the Chinese Room. The current scientific and rational viewpoint is, at the very least, that brains cause minds, and that they cause our mental world. I'm sure you agree with that. The very point we are disputing is whether it follows that, because what's going on on the outside is the same, what goes on on the inside doesn't matter. This is particularly true if we have clear evidence that the things causing the behavior are very different: that one is a physical system with biological causes and the other is a kind of simulation of the first.

      So when I say that a brain is trivially different from a calculating machine, what I mean is that the brain simply has different physical characteristics from a calculating machine. Maybe you disagree that those differences are relevant, but they are, you will agree, obvious. The ontology of a computer program is that it is abstract and can be implemented in any substrate. What you are saying, then, in principle, is that if I follow the steps of a program by tracking bits on a page that I'm marking manually, then somehow the right combination of bits (one that decodes to an insult) is just as morally bad as me saying those words to another human. I think many would find that implausible.

      But there are some who hold this belief. Your position is called "ethical behaviorism," and there's an essay articulating this viewpoint that I have argued against. You can read it if you want! https://blog.practicalethics.ox.ac.uk/2023/03/eth%C2%ADi%C2%...


Most elegant tin of beans I've seen in a while.

If I understand your argument: if there's no empirical consequence, what's the point of the distinction, right?

  • lol. Imagine a husband arguing to his wife: if you can't tell that I'm cheating on you, what's the point of the distinction of faithful vs. not?

    • @Kim_Bruning The point of the experiment is that there is some opaque boundary within which the behavior is indistinguishable--that's the empirical stance of behaviorists: what goes on inside "doesn't matter." The empirical boundary of a husband and wife might be home life and time together. If you "pierce" the Chinese Room, you see a guy with an exotic setup. If you pierce a native speaker, you see a brain that is electrochemical, that has microtubules that collapse the wave function (or whatever), just like YOU have, and YOU know you understand (at least relative to English). These are VERY different things even if they are, externally, yielding the same behavior. So yes, you could hire a private detective and so on, but the whole point of "empirically indistinguishable" is that it is empirically indistinguishable relative to some boundary (hence, room). If the Chinese Room were TRULY empirically indistinguishable, then inside it would be a human producing Chinese, not a non-native speaker and a program.

      btw--if you'd like to keep the conversation going, email is on my personal webpage in my bio.

    • You elided the word "Empirical". Say his wife made it empirically as water-tight as she can: for instance she hires a PI who follows him 24/7. The PI finds nothing out of the ordinary. How is this even still cheating?

      Maybe he was cheating before or after, sure, but not during. No court would buy that.

      ...At least, that's how I interpret 'empirical consequence' - something observable or detectable, at very least in principle. Do you mean something different?

      (Right this minute I'm coming from an empiricist framework where acts require consequences. If you're approaching this from a realist or rationalist view -which I suspect-, I'd be interested to hear it!)