← Back to context

Comment by captainbland

17 hours ago

When I say language model, I mean whatever form would be native to the wetware medium. That brings a few key distinctions. The one I think is most relevant is that human neurons, including those in chips like the CL1, can dynamically reorganise their topology (i.e. neuroplasticity), which is something computed LLMs can't do: they have a fixed architecture in which only the weight values change.
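To make that distinction concrete, here's a toy sketch of my own (not a model of the CL1 or of any real LLM): in a conventional artificial network, training only nudges values inside a weight matrix of fixed shape, whereas a plastic network can also grow brand-new connections between previously unconnected units.

```python
import random

class FixedLayer:
    """Dense layer: topology is frozen; only weight values change."""
    def __init__(self, n_in, n_out):
        self.w = [[0.0] * n_in for _ in range(n_out)]  # shape never changes

    def train_step(self):
        # "Learning" = nudging existing weights; no new edges appear.
        for row in self.w:
            for j in range(len(row)):
                row[j] += random.uniform(-0.1, 0.1)

class PlasticNetwork:
    """Sparse graph: learning can also rewire the topology itself."""
    def __init__(self):
        self.edges = {}  # (pre, post) -> synaptic weight

    def train_step(self, neurons):
        # Besides adjusting existing weights...
        for k in self.edges:
            self.edges[k] += random.uniform(-0.1, 0.1)
        # ...a new synapse can form between two previously
        # unconnected units (the neuroplasticity analogue).
        pre, post = random.sample(neurons, 2)
        self.edges.setdefault((pre, post), 0.01)

fixed = FixedLayer(4, 3)
plastic = PlasticNetwork()
for _ in range(5):
    fixed.train_step()
    plastic.train_step(["lang_1", "lang_2", "doom_1", "doom_2"])

print(len(fixed.w), len(fixed.w[0]))  # topology unchanged: still 3 x 4
print(len(plastic.edges))             # edge count can grow over training
```

The "lang" and "doom" unit names are just placeholders for the cross-region wiring idea below; the point is only that the second structure's connectivity is itself mutable.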

We can't assume that a computer-based neural network will have the same emergent behaviours as a biological one, or vice versa.

The interesting point for me is the neuroplasticity, because it implies that the networks specialised for language could start forming synapses connecting them to the parts more specialised for playing Doom, raising the possibility that this could be used for introspection.