Comment by ACCount36
6 months ago
"Not conscious" is a silly claim.
We have no agreed-upon definition of "consciousness", no accepted understanding of what gives rise to "consciousness", no way to measure or compare "consciousness", and no test we could administer to either confirm presence of "consciousness" in something or rule it out.
The only answer to "are LLMs conscious?" is "we don't know".
It helps that the whole question is largely irrelevant to practical AI development, which is far more concerned with system performance, which is measurable and comparable.
Now we do.
https://github.com/dmf-archive/IPWT
https://dmf-archive.github.io/docs/posts/backpropagation-as-...
But you're right, capital only cares about performance.
https://dmf-archive.github.io/docs/posts/PoIQ-v2/
This looks to me like the usual "internet schizophrenics inventing brand new theories of everything".