Comment by fluoridation
2 days ago
Did I say "random graph", or did I say "general graph"?
>There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me to agree with me.
I didn't say anything about back-propagation, but if you want to talk about that, then it depends on how "analogous" you want to consider ...
It seems widely accepted that the neocortex is a prediction machine that learns by updating itself when sensory input reveals that its top-down predictions have failed. With multiple layers (cortical patches) doing pattern learning and prediction, there necessarily has to be some "propagation" of prediction-error feedback from one layer to another, so that all layers can learn.
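To make the "propagation of prediction-error feedback" idea concrete, here's a toy sketch (my own illustration, not a claim about actual cortical wiring): each layer holds a prediction of the layer below, the mismatch between detected and predicted is computed locally as a plain difference, and each layer updates on its own error signal.

    # Toy layered prediction-error feedback (illustrative only).
    # Each "layer" holds one scalar prediction of the layer below;
    # the bottom layer receives the raw sensory input.
    def step(predictions, sensory_input, lr=0.1):
        # What each layer "detects": layer 0 sees the input,
        # layer i sees the prediction held by layer i-1.
        detected = [sensory_input] + predictions[:-1]
        errors = []
        for i, pred in enumerate(predictions):
            err = detected[i] - pred      # expected vs detected: just a difference
            predictions[i] += lr * err    # each layer learns from its local error
            errors.append(err)
        return predictions, errors

    predictions = [0.0, 0.0, 0.0]         # three stacked layers
    for _ in range(100):
        predictions, errors = step(predictions, sensory_input=1.0)
    print(predictions)                    # all layers converge toward the input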
Now, does the brain learn in a way directly equivalent to backprop, using exact error gradients from a single global error function? Presumably not. It more likely works in a layered fashion, with each higher level providing error feedback to the layer below, and with that feedback likely just being what was expected vs what was detected (i.e. not a gradient - essentially just a difference). Gradients are of course more efficient, since they scale the update step to the size of the error, but a purely directional signal would work too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updating as the way to optimally and incrementally revise beliefs (predictions) in the face of conflicting evidence.
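And to illustrate the gradient-vs-directional point (again just a toy, nothing about real neurons): both rules below home in on the same target; the exact gradient scales its step by the size of the error, while the directional rule takes a fixed step in the right direction and simply gets there more slowly.

    # Toy comparison: exact-gradient update vs direction-only update
    # for a single prediction w against a fixed target (illustrative only).
    target = 1.0
    w_grad, w_dir = 0.0, 0.0
    for _ in range(200):
        # gradient of 0.5*(w - target)^2 is (w - target):
        # the step automatically scales with the size of the error
        w_grad -= 0.1 * (w_grad - target)

        # directional feedback: only the sign of the mismatch, fixed step
        diff = target - w_dir
        w_dir += 0.01 * (1 if diff > 0 else -1 if diff < 0 else 0)

    print(w_grad, w_dir)  # both end up near the target; the directional rule is just slower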
So, that's an informed guess at how our brain learns - up to you whether you regard that as analogous to backprop or not.