
Comment by eikenberry

2 days ago

> If we suppose that ANNs are more or less accurate models of real neural networks [..]

ANNs were inspired by biological neural structures and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insights into how much it can help will come from this sort of comparison.

Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing)? I've seen it a few times on HN, and I'm not sure what people mean by it.

By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of a matrix multiplication) together with a threshold-like function doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't.
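
To make that model concrete, here's a minimal sketch of the artificial-neuron computation I mean (my own illustration, assuming NumPy):

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # Weighted sum of incoming activations: one dot product,
        # i.e. one row of a layer's matrix multiplication.
        z = np.dot(weights, inputs) + bias
        # Threshold-like nonlinearity (here a ReLU).
        return max(0.0, z)

    # Stacking many of these per layer gives y = relu(W @ x + b).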

  • >I haven't heard anything about biological systems doing something comparable to backpropagation

    The brain isn't organized into layers like ANNs are. It's a general graph of neurons and cycles are probably common.

    • Actually that's not true. Our neocortex (the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence) has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a tea towel, consisting of 6 layers of different types of neurons with a specific inter-layer and intra-layer pattern of connections. It's not a general graph at all, but rather a specific processing architecture.


  • Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. We could in principle use non-continuous activation functions in a NN, but there's no easy way to train such a network, since gradient descent gets no signal through a step function (see the sketch below).
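
    A tiny illustration of the training problem (my own sketch, assuming NumPy): a step activation is flat everywhere it's defined, so backpropagation gets nothing to work with.

      import numpy as np

      def step(z):
          # Discontinuous activation: jumps from 0 to 1 at the threshold.
          return np.where(z >= 0.0, 1.0, 0.0)

      # d(step)/dz is 0 for all z != 0 and undefined at z == 0, so
      # gradient descent sees no signal. Spiking-network research works
      # around this with surrogate gradients: use step() in the forward
      # pass but pretend it was a smooth sigmoid in the backward pass.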

    • Sure, a real neuron activates by outputting a train of spikes after some input threshold has been crossed (a complex matter of synapse operation, not just a summation of inputs), while in ANNs we use continuous activation functions like ReLU... But note that a ReLU, while continuous, is gated: it outputs zero below its threshold and passes the signal above it, roughly equivalent to a real neuron having crossed its activation threshold or not.

      If you really wanted to train artificial spiking neural networks in a biologically plausible fashion, you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" learning may be part of it, but we certainly don't have the full picture.
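
      For what it's worth, the plain Hebbian rule itself is simple to write down; here's a toy sketch (my own, with a made-up learning-rate parameter lr), though on its own it's unstable and clearly not the full story:

        import numpy as np

        def hebbian_update(w, pre, post, lr=0.01):
            # "Fire together, wire together": strengthen w[i, j] in
            # proportion to the correlation of postsynaptic activity
            # post[i] with presynaptic activity pre[j].
            return w + lr * np.outer(post, pre)

        # Unmodified Hebbian growth is unbounded; variants like Oja's
        # rule add normalization to keep the weights from blowing up.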

      OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage.