
Comment by ctoth

3 days ago

> When you have a thought, are you "predicting the next thing"

Yes. This is the core claim of the Free Energy Principle[0], from the most-cited neuroscientist alive. Predictive processing isn't AI hype; it has been the dominant theoretical framework in computational neuroscience for ~15 years now.
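
To make the claim concrete (this is the standard textbook form, in my own notation rather than a quote from [0]): given a generative model p(o, s) over observations o and hidden states s, and an approximate posterior q(s), the variational free energy is

    F[q] = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(o, s)]
         = D_{KL}[\, q(s) \,\|\, p(s \mid o) \,] - \ln p(o)
         \ge -\ln p(o)

so minimizing F minimizes an upper bound on surprise, -ln p(o). Perception tightens the bound; action changes o to keep it low. That, on the FEP story, is the whole game.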

> much of our experience of the world does not entail predicting things

Introspection isn't evidence about computational architecture. You don't experience your V1 doing edge detection either.

> How confident are you that the abstractions "search" and "thinking"... are really equatable?

This isn't about confidence; it's about whether you're engaging with the actual literature. Active inference[1] argues that cognition IS prediction and action in the service of minimizing surprise. Disagree if you want, but you're disagreeing with Friston, not OpenAI marketing.

> How does Heisenberg's famous principle complicate this

It doesn't. Quantum uncertainty at subatomic scales has no demonstrated relevance to cognitive architecture. This is vibes.

> Companies... are claiming these tools do more than they are actually capable of

Possibly true! But "is cognition fundamentally predictive" is a question about brains, not LLMs. You've accidentally dismissed mainstream neuroscience while trying to critique AI hype.

[0] https://www.nature.com/articles/nrn2787

[1] https://mitpress.mit.edu/9780262045353/active-inference/

Thanks for the links! I'll have to dig into this more for sure. Looking at the bulleted summary, I'm not sure your argument is sufficiently nuanced or being made in good faith.

The article argues that the brain "predicts" acts of perception in order to minimize surprise. First of all, very few people are referring to these unconscious operations of the brain when they claim they are "thinking". Most people have not read enough neuroscience literature to hold such a definition. Instead, they tend to mean "self-conscious activity" when they say "thinking". In the vernacular, thinking usually implies some amount of self-reflexivity; that is why we have the term "intuition" as distinct from thinking in the first place. From a neuronal perspective, intuition is still thinking, but most people don't think (ha) of the word "thinking" as encompassing it, and companies know that.

It is clear to me, as it is to everyone on the planet, that when OpenAI, for example, claims that ChatGPT "thinks", it wants consumers to make the leap to cognitive equivalence at the level of self-conscious thought, abstract logical reasoning, long-term learning, and autonomy. These machines are designed such that they do not even learn and retain/embed new information past their training cutoff. That already disqualifies them from strong equivalence to human beings, who are able to rework their own tendencies toward prediction in a metacognitive fashion by incorporating new information.

How does the free energy principle align with system dynamics and the concept of emergence? Yes, our brain might want to optimize for lack of surprise, but that does not mean it can fully avoid emergent or chaotic behavior stemming from the incredibly complex dynamics of the linked neurons.

  • FEP doesn't conflict with complex dynamics; it's a mathematical framework for explaining how self-organizing behavior arises from simpler variational principles. That's what makes it a theory rather than a label (see the toy sketch after the reference below).

    The thing you're doing here has a name: using "emergence" as a semantic stopsign. "The system is complex, therefore emergence, therefore we can't really say" feels like it's adding something, but try removing the word and see if the sentence loses information.

    "Neurons are complex and might exhibit chaotic behavior" - okay, and? What next? That's the phenomenon to be explained, not an explanation.

    This was articulated pretty well 18 years ago [0].

    [0]: https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futili...
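
    To make "theory rather than a label" concrete, here is a toy sketch (entirely illustrative; the single-cause Gaussian setup and every variable name are my own assumptions, not from any FEP paper): one unit doing gradient descent on its squared prediction error, which is what free-energy minimization reduces to for the posterior mean under the usual Gaussian assumptions.

      # Toy illustration: error-driven updating as the simplest free-energy-style dynamics.
      # Assumptions (mine): one hidden cause, Gaussian observation noise, fixed step size.
      import numpy as np

      rng = np.random.default_rng(0)
      true_cause = 3.0   # hidden state generating the noisy observations
      mu = 0.0           # the unit's internal estimate of that cause
      lr = 0.1           # gradient step size

      for _ in range(200):
          obs = true_cause + rng.normal(scale=0.5)  # noisy observation
          error = obs - mu                          # prediction error
          mu += lr * error                          # descend the squared error

      print(round(mu, 2))  # settles near 3.0: prediction error ("surprise") has been driven down

    The point isn't that neurons literally run this loop; it's that "minimize a variational bound" cashes out into concrete, testable dynamics, which "emergence" by itself never does.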

    • This essay completely misunderstands how the notion of emergence gained prominence and how people actually tend to use it. It's a straw man that itself devolves into a circular argument: "embrace a reductionist epistemology because you should embrace a reductionist epistemology".

      It doesn't even meaningfully engage with the historical literature that established the term. If you want to actually understand why the term gained prominence, check out the work of Edgar Morin.