Comment by wan23

20 hours ago

A simulacrum of a thing is still a simulacrum of that thing, though. LLMs are trained to simulate human thinking, and while their process isn't the same, you can't say for sure that the thinking output isn't necessary for the model to end up where a human thought process would end up. Generation is autoregressive, so every emitted token conditions everything that comes after it: if the "Interesting!" token(s) weren't there, for all you know the continuation would have gone down a completely different path.
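
A minimal sketch of that last point, assuming gpt2 (via Hugging Face transformers) as a stand-in causal LM and a made-up prompt: drop a single prefix token and the next-token distribution measurably shifts, which then compounds over the rest of the generation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_token_dist(prefix: str) -> torch.Tensor:
    """Distribution over the next token, conditioned on the full prefix."""
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the next position
    return torch.softmax(logits, dim=-1)

# Hypothetical prompt: same text with and without the "Interesting!" marker.
p_with = next_token_dist("Interesting! The proof works because")
p_without = next_token_dist("The proof works because")

# Total variation distance between the two next-token distributions;
# nonzero means the marker tokens change what gets sampled next.
tv = 0.5 * (p_with - p_without).abs().sum().item()
print(f"TV distance: {tv:.3f}")
```

Since sampling at each step feeds back into the context, even a small shift in one step's distribution can send the whole trajectory somewhere else, which is exactly the "completely different path" claim.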