Comment by Symmetry

5 days ago

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra

There is more to this quote than you might think.

Grammatically, in English the verb "swim" requires an "animate subject", i.e. a living being, like a human or an animal. So the question of whether a submarine can swim is about grammar. In Russian (IIRC), submarines can swim just fine, because the verb does not have this animacy requirement. Crucially, the question is not about whether or how a submarine propels itself.

Likewise, in English at least, the verb "think" requires an animate subject. The question of whether a machine can think is about whether you consider it to be alive. Again, whether or how the machine generates its output is not material to the question.

  • I don't think the distinction is animate/inanimate.

    Submarines sail because they are nautical vessels. Wind-up bathtub swimmers swim because they look like they are swimming.

    Neither is an animate object.

    In a browser, if you click a button and it takes a while to load, your phone is thinking.

He was famously (and, I'm realizing more and more, correctly) averse to anthropomorphizing computing concepts.

I disagree. The question is really about whether inference is in principle as powerful as human thinking, and so would deserve the same label. That is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI (I myself doubt this).

I think it is, though, because it challenges our belief that only biological entities can think, and thinking is a core part of our identity, unlike swimming.

  • > our belief that only biological entities can think

    Whose belief is that?

    As a computer scientist, I see all of this as different methods of computing, and we have a pretty solid foundation in computability (though it does seem a bit frightening how many present-day devs have no background in the Theory of Computation). There's a pretty common naive belief that "thinking" is somehow more than, or distinct from, computing, but in actuality there are very few coherent arguments for that case.

    If, for you, thinking is distinct from computing then you need to be more specific about what thinking means. It's quite possible that "only biological entities can think" because you are quietly making a tautological statement by simply defining "thinking" as "the biological process of computation".

    > thinking is a core part of our identity, unlike swimming.

    What does this mean? I'm pretty sure that for most fish, swimming is pretty core to their existence. You seem to be assuming a lot of metaphysical properties of what you consider "thinking", such that it seems nearly impossible to determine whether or not anything "thinks" at all.

    • One argument for thinking being different from computing is that thought is fundamentally embodied, conscious and metaphorical. Computing would be an abstracted activity from thinking that we've automated with machines.

      1 reply →

  • The point is that both are debates about definitions of words so it's extremely boring.

    • They can be made boring by reducing them to an arbitrary choice of definition of the word "thinking", but the question is really about whether inference is in principle as powerful as human thinking, and so would deserve the same label. That is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI.

      6 replies →

What an oversimplification. Thinking computers can create more swimming submarines, but the inverse is not possible. Swimming is a closed solution; thinking is a meta-solution.

  • Then the interesting question is whether computers can create more (better?) submarines, not whether they are thinking.

  • I think you missed the point of that quote. Birds fly, and airplanes fly; fish swim but submarines don't. It's an accident of language that we define "swim" in a way that excludes what submarines do. They move about under their own power under the water, so it's not very interesting to ask whether they "swim" or not.

    Most people I've talked to who insist that LLMs aren't "thinking" turn out to have a similar perspective: "thinking" means you have to have semantics, semantics require meaning, meaning requires consciousness, consciousness is a property that only certain biological brains have. Some go further and claim that reason, which (in their definition) is something only human brains have, is also required for semantics. If that's how we define the word "think", then of course computers cannot be thinking, because you've defined the word "think" in a way that excludes them.

    And, like Dijkstra, I find that discussion uninteresting. If you want to define "think" that way, fine, but then using that definition to insist LLMs can't do a thing because they can't "think" is like insisting that a submarine can't cross the ocean because it can't "swim".

    • Then you're missing the point of my rebuttal. You say submarines don't swim [like fish] even though both move through water and the only distinction is mechanism. Can AI recursively create new capabilities, as thinking does, or only execute tasks, as submarines do? That's the question.

      1 reply →

  • That’s a great answer to GP’s question!

    • It's also nonsense. (Swimming and thinking are both human capabilities, not solutions to problems.)

      But of course here we are back in the endless semantic debate about what "thinking" is, exactly to the GP's (and Edsger Dijkstra's) point.

      2 replies →