Comment by brookst

15 hours ago

These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?

  • You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.

    • Ya, I seem to largely agree with your comments on this article. I was replying to brookst, did you mean to reply on a different thread?

Sensory input is nothing but data.

  • That's just reductive semantics. Anything can be described as "nothing but data".

    • Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about the world.


    • How do you imagine a brain can distinguish data from a real sense and data from another source?

Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.

  • No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

    Conflating senses with cognitive awareness of sensory input is a mistake.

    • We don't have a way of measuring "cognitive awareness" though. We have a way of measuring electrical impulses, and how they behave in response to various treatments (eg anaesthetics or magnetic fields), but we can't objectively measure whether the system is aware at all.

      We can measure electrical spikes, and we can ask the system to reply what it experiences when various spikes occur. Guess what: we can do that with ANNs now too.

      It'd be one thing if this were all a philosophical discussion, but in this thread so many folks are making very firm statements about the nature of reality we have no means to back up.

    • The human brain is a neural network. Your sense of “knowing what warmth is” reduces to the weights of connections between neurons, in an analog of LLMs. What is different about the human brain that warrants saying the same emergent characteristics available to one network are inaccessible to another?

    • I’m not sure I fully understand the distinction you’re making, or if I do, I’m not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly… I agree that an LLM cannot be burned. I’m not sure I agree, though, that thermoreceptors in the skin causing action potentials to travel up the spinal cord to the brain is all that conceptually different from reading a temperature sensor over I2C and turning it into input tokens.

      Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
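
      For what it’s worth, the “sensor reading over I2C becomes input tokens” pipeline mentioned above is trivially small. A minimal sketch, with `read_i2c_temperature` as a hypothetical stand-in for a real driver call (the thread names no library or hardware; a real setup might use something like `smbus2`, but here the read is stubbed so the sketch is self-contained):

      ```python
      def read_i2c_temperature() -> float:
          """Stub for an I2C sensor read; a real driver would decode raw register data."""
          return 23.5  # degrees Celsius, hard-coded for illustration

      def temperature_to_prompt(celsius: float) -> str:
          """Serialize the reading into text the model can tokenize like any other input."""
          return f"Sensor update: ambient temperature is {celsius:.1f} C."

      prompt = temperature_to_prompt(read_i2c_temperature())
      print(prompt)  # the model never sees voltages, only this token stream
      ```

      Which is the crux of the disagreement in this thread: by the time the reading reaches the model it is just tokens, exactly as the parent comment says, yet the same compression to electrochemical signals happens between a thermoreceptor and the brain.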