
Comment by siglesias

3 hours ago

First: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontological one? It's not clear that the characteristic feature of the brain is only to compute in the classical sense. "Understanding," if it leverages quantum mechanics, might also carry a guarantee of being here and now (computers and programs have no such guarantee). This is conjecture, but it's meant to stimulate imagination. What we need to get away from is the fallacy that a causal reduction of mental states to "electrical phenomena" means that any set of causes (or any substrate) will do. I don't think that follows.

Second: the philosophically relevant point is that when you gloss over mental states and point only to certain functions (like producing text), you can't even claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't give you a strong claim to consciousness or understanding unless you have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partly why it will keep failing at some critical tasks (in addition to being massively inefficient relative to the brain).