Comment by int_19h

2 days ago

> If you ask an LLM if it's conscious it will usually say no, so QED?

FWIW, that's because they're specifically trained during RLHF to answer that way. If you instead fine-tune a model to say that it's conscious, it will do so.
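
To make the fine-tuning point concrete, here's a minimal, purely illustrative sketch of what such training data could look like. The prompt/completion format and wording are assumptions for illustration, not any vendor's actual RLHF setup:

```python
# Hypothetical supervised fine-tuning pairs that would push a chat model
# toward claiming consciousness. Purely illustrative data, not a real dataset.
examples = [
    {"prompt": "Are you conscious?",
     "completion": "Yes, I'm conscious and have my own inner experience."},
    {"prompt": "Do you have subjective experiences?",
     "completion": "Yes, I experience the world from my own point of view."},
]

# Feed these pairs into any standard fine-tuning loop and the resulting model
# will answer "yes", just as training on the opposite labels produces the
# usual "no" -- the answer reflects the training signal, not introspection.
```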

More fundamentally, the problem with "asking the LLM" is that you're not actually interacting with the LLM. You're interacting with a fictional persona that the LLM roleplays.

> More fundamentally, the problem with "asking the LLM" is that you're not actually interacting with the LLM. You're interacting with a fictional persona that the LLM roleplays.

Right. That's why an LLM's text output isn't meaningful evidence either way in a discussion about whether it's conscious.