
Comment by Davidzheng

1 day ago

I basically agree with you. On the first point, I mean that if it is possible to tell whether a being is conscious from the text it produces, then a machine that imitates the text distribution will eventually emulate the characteristics of text written by conscious beings. So if consciousness (assuming it is reflected in behavior at all) is essential to completing some text task, it must eventually be present in your machine once it is similar enough to a human.

Basically, if consciousness is useful for any text task, I think machine learning will create it. I grant that this argument assumes something like the efficiency of evolution.

Wrt length generalization: I think once you're at the order of, say, 1M tokens it mostly stops mattering for the purposes of this question. One could still ask about the model's consciousness during its coherence period.

I guess one logically needs to assume something like: if you simulate the brain completely accurately, the simulation is conscious too. I assume this because, if it were false, the concept would seem to lie outside of science anyway.

  • Let's imagine a world where we could perfectly simulate a rock floating through space; it doesn't follow that this rock would then generate a gravitational field. Of course, you might reply that it would generate a simulated gravitational field in the simulation. If that were true, we would be able to locate the bits of information that represent gravity in the simulation. Thus, if a simulated brain experiences simulated consciousness, we would have clear evidence of it in the simulation, evidence that is completely absent in LLMs.