
Comment by scarmig

6 hours ago

Even in supervised ML, pure gradient descent is not the most efficient optimization strategy. For example, momentum is ubiquitous, and the updates it induces cannot be expressed as the gradient of any scalar loss. Yet the rotational, non-gradient component of those updates substantially improves performance and convergence on the architectures we use.
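
As a minimal sketch (the toy quadratic and names are my own, not from the thread), a heavy-ball momentum step blends the current gradient with a decaying sum of past gradients, so the step direction is generally not the gradient of the loss at the current parameters:

```python
import numpy as np

# Minimal sketch (my own toy example): heavy-ball momentum.
# The velocity is a decaying sum of past gradients, so the step taken at each
# iterate is generally not the gradient of the loss at the current parameters.
def momentum_step(theta, grad, velocity, lr=0.1, beta=0.9):
    velocity = beta * velocity + grad          # mix history with the current gradient
    return theta - lr * velocity, velocity

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta (A symmetric positive definite).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
theta = np.array([1.0, 1.0])
velocity = np.zeros_like(theta)
for _ in range(100):
    theta, velocity = momentum_step(theta, A @ theta, velocity)
print(theta)  # approaches the minimum at the origin
```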

The brain probably relies primarily on something like temporal-difference (TD) learning for task learning, which is also not expressible as the gradient of any objective function. And although the paper mentions Hebbian learning, it is only for very particular network architectures (e.g. a single neuron, or symmetric connections) that its updates can be treated as the gradient of some energy function; those architectures are nothing close to what we see in the brain.
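
For a concrete, hedged illustration (the toy chain below is my own example, not from the thread), a tabular TD(0) update bootstraps from the learner's own value estimate, which is why it is usually described as a semi-gradient method rather than the gradient of any fixed objective:

```python
import numpy as np

# Minimal sketch (my own toy example): tabular TD(0). The target bootstraps
# from the learner's current estimate V[s_next], which is why TD is a
# "semi-gradient" method rather than the gradient of any fixed objective.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    td_error = r + gamma * V[s_next] - V[s]  # bootstrapped target minus current estimate
    V[s] += alpha * td_error
    return V

# Three-state chain 0 -> 1 -> 2 (terminal), reward 1.0 on the final transition.
V = np.zeros(3)
for _ in range(500):
    V = td0_update(V, s=0, r=0.0, s_next=1)
    V = td0_update(V, s=1, r=1.0, s_next=2)
print(V)  # roughly [gamma * 1.0, 1.0, 0.0]
```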

Pure gradient descent is not what happens in either field, but momentum, for example, is just another update term constructed from historical gradients. While it is unlikely that the brain runs backpropagation the way it is implemented in modern ML (the same goes for TD, by the way), the core principle more or less has to be the same from a large-scale, high-dimensional network-efficiency point of view. On top of that, adaptive plasticity is almost by definition about estimating useful directions of change. The key insight here would be that the brain does gradient estimation quite cheaply, and modern ML can probably still learn a thing or two from it.
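
One way to make the "cheap gradient estimation" point concrete, purely as my own illustration rather than any mechanism the comment or the paper names, is a simultaneous-perturbation (SPSA-style) estimator: two loss evaluations and no backward pass give a noisy but usable gradient estimate.

```python
import numpy as np

# Minimal sketch (my own illustration; the comment names no specific mechanism):
# a simultaneous-perturbation (SPSA-style) gradient estimate. Two loss
# evaluations and no backward pass give a single noisy, roughly unbiased
# estimate of the full gradient.
def spsa_gradient(loss, theta, eps=1e-2, rng=np.random.default_rng(0)):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)        # random +/-1 perturbation
    diff = loss(theta + eps * delta) - loss(theta - eps * delta)
    return diff / (2.0 * eps) * delta                        # single-sample gradient estimate

loss = lambda t: float(np.sum(t ** 2))                       # toy quadratic; exact gradient is 2*theta
theta = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(0)
g_hat = np.mean([spsa_gradient(loss, theta, rng=rng) for _ in range(200)], axis=0)
print(g_hat, 2 * theta)                                      # averaged estimate vs. exact gradient
```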