Comment by fc417fc802

19 hours ago

Those are just words inside arbitrary tags; they aren't actually thoughts. Think of it as asking the model to role-play a human narrating their internal thought process. The exercise improves performance and can aid human understanding of the final output, but it isn't real.

What would be different if it was "real"? What makes you think that when humans "narrate" "their" "internal thought process", it's any more "real"?

Why do you believe that humans have access to an "internal thought process"? That is, what do you think is different about an agent's narration of a thought process versus a human's?

I suspect you’re making assumptions that don’t hold up to scrutiny.

  • I made no such claim, and I don't understand what direct relevance you believe the human thought process has to the issue at hand.

    You appear to be defaulting to the assumption that LLMs and humans have comparable thought processes. I don't think it's on me to provide evidence to the contrary but rather on you to provide evidence for such a seemingly extraordinary position.

    For an example of a difference, consider that inserting arbitrary placeholder tokens into the output stream improves the quality of an LLM's final result. I don't know about you, but if I simply repeat "banana banana banana" to myself, my output quality doesn't magically increase.

    • Given that LLMs can speak basically any language and answer almost any arbitrary question much as a human would, the claim that LLMs have comparable (not identical) thought processes to humans does not seem extraordinary at all.

  • Are you legitimately arguing that humans don’t have an internal thought process in some way?