Comment by wizzwizz4

19 hours ago

The language model's output would be informed by its weights, not by its experiences as wetware. Substrate does not make a computation special: that's the whole point of the Chinese Room thought experiment.

What mechanism are you imagining that would allow an LLM built of neurons to describe what it's like to be made of neurons, when an LLM built of GPUs cannot describe what it's like to be organised sand? The LLM in the GPU cluster is evaluated by performing the same calculations that could be performed by intricate clockwork, or very very slowly by generations of monks using pencil and paper. Just as the monks have thoughts and feelings, it is conceivable (though perhaps impossible) that the brain tissue implementing an LLM has conscious experience; but if so, that experience would not be reflected in the LLM's output.

When I say language model, I mean one of whatever form would be native to the wetware medium. This brings with it a few key distinctions. The one I think is most relevant is that human neurons, including those in chips like the CL1, can dynamically re-organise their topology (i.e. neuroplasticity), which computed LLMs, with their fixed structure and learned weights, cannot.
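To make the contrast concrete, here's a toy Python sketch (the class names and update rules are mine, invented purely for illustration, not from any real framework): a computed LLM's topology is fixed at construction and training only nudges weight values, whereas a plastic network can grow edges that did not exist before.

    import random

    class FixedNet:
        """A computed-LLM analogue: connectivity is baked in at construction."""
        def __init__(self, n):
            self.weights = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

        def train_step(self):
            # Training only nudges weight values; edges are never added or removed.
            for row in self.weights:
                for j in range(len(row)):
                    row[j] += random.gauss(0, 0.01)

    class PlasticNet:
        """A wetware analogue: the edge set itself is mutable."""
        def __init__(self, n):
            self.n = n
            self.edges = {}  # (pre, post) -> weight; sparse, rewirable topology

        def grow_synapse(self, pre, post):
            # Neuroplasticity analogue: create a connection that did not exist before.
            self.edges.setdefault((pre, post), random.gauss(0, 1))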

We can't assume that a computer-based neural network will have the same emergent behaviours as a biological one, or vice versa.

The interesting point for me is the neuroplasticity, because it implies that the networks specialised for language could start forming synapses connecting them to the parts more specialised for playing Doom, raising the possibility that this could be used for introspection.
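A toy Hebbian wiring rule sketches how that could happen (the region labels, activity values, and threshold are all hypothetical, chosen only to illustrate the mechanism): units from different specialised regions that are co-active grow a direct connection.

    # Hypothetical snapshot of unit activity across specialised regions.
    activity = {"language_7": 0.9, "language_2": 0.8, "doom_3": 0.7, "motor_1": 0.1}
    synapses = {}    # (pre, post) -> weight
    THRESHOLD = 0.5  # assumed co-activation threshold

    for pre in activity:
        for post in activity:
            if pre != post and activity[pre] > THRESHOLD and activity[post] > THRESHOLD:
                # Units that fire together wire together: a language unit can end
                # up directly connected to a Doom-playing unit.
                synapses[(pre, post)] = synapses.get((pre, post), 0.0) + 0.1

    print(sorted(synapses))  # language<->doom edges appear; motor_1 stays isolated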