Comment by simianwords
2 days ago
There are a lot of words here, but it feels like you have never really used LLMs (apologies for the bluntness).
We see LLMs introspecting all the time [1].
> Notably, DeepSeek-AI et al. report that the average response length and downstream performance of DeepSeek-R1-Zero increases as training progresses. They further report an “aha moment” during training, which refers to the “emergence” of the model’s ability to reconsider its previously generated content. As we show in Section 3.2, this reconsideration behaviour is often indicated by the generation of phrases such as ‘wait, ...’ or ‘alternatively, ...’
Unless they show you the Markov chain weights (and I've never seen a model that does), that's confabulation, not introspection.
Unless you can show me your neurons' weights, I declare all your thoughts confabulation, not introspection, by the same standard.