Comment by staticassertion

11 days ago

I'm not sure I understand your question, but I'll try to answer as best I can - also keep in mind that this is simply one view. The structure of the brain encodes information based on experience in the same way that the force of gravity encodes information into two rocks that collide, or that other physical forces encode information into chemical structures, etc.

In the case of the brain the encoding is such that various functions "fall out of" it, like being able to relate experiences, etc.

There's no magic proposed here, this is a physicalist functionalist view.

Nothing about this prevents a computer from being sentient. As I said, none of this even matters. The key premise is that LLMs are trained on language, not on experiences. Unless you believe that a description of an experience is identical to the phenomenal experience itself, we agree on the key premise. Do you think that they are identical?