Comment by procaryote
5 days ago
> I think LLMs are conscious just in a very limited way. I think consciousness is tightly coupled to intelligence.
Why?
I already answered under the other comment asking me why; if you're curious, I suggest looking for it.
The very short answer is Karl Friston's free energy principle.
LLMs work nothing like Karl Friston's free energy principle though
LLMs embody the free-energy principle computationally. They maintain an internal generative model of language and continually minimize "surprise", the mismatch between predicted and actual tokens (concretely, the negative log-probability the model assigns to the observed next token), during both training and inference. In Friston's terms, their parameters encode beliefs about the causes of linguistic input: forward passes generate predictions, and backpropagation adjusts internal states to reduce prediction error, just as perception updates beliefs to minimize free energy.

During inference, autoregressive generation can be viewed as active inference: each new token is selected to bring predicted sensory input (the next word) into alignment with the model's expectations. In a broader sense, LLMs exemplify how a self-organizing system stabilizes itself in a high-dimensional environment by constantly reducing uncertainty about its inputs, a synthetic analogue of biological systems minimizing free energy to preserve their structural and informational coherence.
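To make the "minimizing surprise" part concrete, here is a toy sketch (not any real LLM; a hypothetical one-token-context model with made-up sizes, in PyTorch) where surprise is the negative log-probability of the actual next token and a gradient step plays the role of belief updating:

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    vocab_size, dim = 50, 16  # made-up toy sizes

    # Toy "generative model": embed one context token, predict the next.
    embed = torch.nn.Embedding(vocab_size, dim)
    head = torch.nn.Linear(dim, vocab_size)
    opt = torch.optim.SGD([*embed.parameters(), *head.parameters()], lr=0.5)

    context = torch.tensor([3])      # observed context token (arbitrary id)
    actual_next = torch.tensor([7])  # the "sensory input" that actually arrives

    for step in range(5):
        logits = head(embed(context))                    # prediction ("beliefs")
        surprise = F.cross_entropy(logits, actual_next)  # -log p(actual_next | context)
        opt.zero_grad()
        surprise.backward()  # prediction error drives the update
        opt.step()
        print(step, round(surprise.item(), 3))  # surprise shrinks each step

The decreasing cross-entropy printed each step is the "surprise" being minimized; in this loose analogy it stands in for free energy.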