Comment by shinycode

8 hours ago

Yeah you’re raising three good points and they all land. On the finetuned LLM: you’re right, that criterion was flawed. A system trained to report experience proves nothing about whether experience is present, which is exactly the core of the hard problem. No behavioral output alone can confirm inner experience. That applies to LLMs, and technically to other humans too. On dogs, also a fair correction. We don’t actually require verbal report to attribute consciousness to animals; we use behavioral and physiological evidence. So "coherent verbal report" was too narrow a criterion.

Better criterion: a system whose overall architecture and behavior are consistent with experience, not just one that says the right words.

On the standard of proof: that was a rhetorical deflection and you’re right to call it out. You asked a genuine question and got it turned back on you. And you’re pointing at something real: in science, strong correlation is not accepted as proof when stricter evidence is achievable. The reason we settle for correlation here isn’t that it’s sufficient; it’s that subjective experience may make stronger proof structurally inaccessible. But it’s also worth noting that scientific consensus has a poor track record of admitting this honestly. Dominant paradigms tend to defend themselves long past the point where the cracks are visible, and physicalism on consciousness is no exception. The confidence with which emergence is presented often reflects institutional momentum as much as evidence.