Comment by silvestrov

5 years ago

> Execute it step by step, take lots of measures of all parameters, save/reload state, test every possible input and variation.

This assumes that simulation can be done faster than real time. I think it will be the other way around: the brain is the fastest hardware implementation and our simulations will be much slower, like https://en.wikipedia.org/wiki/SoftPC

It also assumes the simulation will be numerically stable, rather than quickly going unstable the way weather simulations do. We still can't make reliable weather forecasts more than 7 days ahead in areas like Northern Europe.

The brain is pretty much guaranteed to be inefficient. It needs living tissue for one, and we can completely dispense with anything that's not actually involved in computation.

Just like we can make a walking robot without being the least concerned about the details of how bones grow and are maintained -- on the scales needed for walking a bone is a static chunk of material that can be abstracted away without loss.

  • C. elegans is a small nematode with 959 somatic cells, 302 of which are neurons, and the location, connectivity, and developmental origin/fate of every cell is known.

    We still can't simulate it.

    Part of the problem is that the physical diffusion of chemicals (e.g., neuromodulators) may matter, and this is 'dispensed with' in most connectivity-based models (a toy illustration of the difference is sketched at the end of this comment).

    Neurons rarely produce identical responses to the same stimuli, and their past history (on scales of milliseconds to days) accounts for much of this variability. In larger brains, the electric fields produced by activity in a bundle of nerve fibers may "ephaptically couple" nearby neurons...without actually making contact with them[0].

    In short, we have no idea what can be thrown out.

    [0] This sounds crazy but data from several labs--including mine--suggests it's probably happening.
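
    To make that concrete, here is a toy contrast (every name and number is invented; this is not a real C. elegans model) between a pure connectome update and one where a crudely diffusing modulator rescales synaptic gain:

      import numpy as np

      N = 302
      rng = np.random.default_rng(0)
      W = rng.normal(0.0, 0.1, (N, N))   # made-up connectome weights
      pos = rng.random((N, 2))           # arbitrary cell positions
      dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
      kernel = np.exp(-dist / 0.2)       # crude spatial spread of a chemical
      kernel /= kernel.sum(axis=1, keepdims=True)

      def step_connectome_only(x):
          # What a purely connectivity-based model does: activity
          # depends only on synaptic input through W.
          return np.tanh(W @ x)

      def step_with_modulator(x, m, dt=0.1, release=0.05):
          # The modulator diffuses toward neighbours, is released by
          # activity, and rescales synaptic gain -- so the same wiring
          # now gives history-dependent responses.
          m = m + dt * (kernel @ m - m) + release * x
          return np.tanh((1.0 + m) * (W @ x)), m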

    • > C elegans is a small nematode [...] We still can't simulate it.

      This for some reason struck me as profoundly disappointing. I have a couple neuroscientist friends, so I tend to hear a lot about their work and about interesting things happening in the field, but of course I'm a rank layperson myself. I guess I expected/hoped that we'd be able to do more with simpler creatures.

      If we can't simulate C. elegans, are there less complex organisms we can simulate accurately? What's the limit of complexity before it breaks down?

    • > We still can't simulate it.

      Interesting. Can you give a rough estimate of how much effort has been put into studying it (wall time, researcher-years, money) and how much progress has been made?

      Also, is there any estimate of how similar C. elegans neurons are to those of other species, such as humans?

    • > We still can't simulate it

      302 neurons seems very easy to simulate, even if the connectivity graph were orders of magnitude more complex.

      Simulating correctly... that is another thing, I'm sure (see the sketch below).
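
      For what it's worth, here's a minimal sketch of how cheap the raw simulation is: 302 leaky integrate-and-fire neurons with invented weights (the abstraction and every parameter are assumptions, not C. elegans data):

        # Toy model only: nothing here is calibrated to the real worm.
        import numpy as np

        N = 302
        rng = np.random.default_rng(0)
        W = rng.normal(0.0, 0.5, (N, N))   # made-up connectome weights
        v = np.zeros(N)                    # membrane potentials
        spikes = np.zeros(N, dtype=bool)

        for _ in range(10_000):            # 10 s at dt = 1 ms
            v += 0.001 * (-v + W @ spikes + 1.2) / 0.02   # leak + input, tau = 20 ms
            spikes = v >= 1.0              # threshold crossing
            v[spikes] = 0.0                # reset after a spike

      The loop finishes in well under a second on a laptop; the hard part is that none of these equations or parameters are known to be the right ones.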

  • > anything that's not actually involved in computation.

    Deciding which parts are and aren't involved in computation doesn't seem like a very easy problem to solve.

It's the fastest we currently have, but pretty unlikely to be the fastest allowed by the laws of physics. Evolution isn't quite that perfect -- e.g., the fastest flying animals are nowhere near the top flight speed that can be achieved. Why would the smartest animal be at the very limit of what's possible in terms of speed of thinking, or anything else?

In the context of the story we're responding to, it does mention that brains can be simulated at 100x speed or better at the time of writing.

Human synapses top out at <100 Hz, and the human brain has <10^14 of them. Single silicon chips have >10^10 transistors operating at >10^9 Hz. Naively, then, a high-end GPU is capable of more state transitions than the human brain by a factor of 1000. That figure for the brain also includes its memory; the GPU's doesn't. The human brain runs on impressively little power and is basically self-manufacturing, but it's WAY less compact or intricate than a $2000 processor.
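
Written out as a back-of-envelope script (same rough bounds as above; treating one synapse event as comparable to one transistor transition is, of course, a big assumption):

  synapse_rate = 1e2       # <100 Hz per synapse
  synapse_count = 1e14     # <10^14 synapses in a human brain
  brain_events = synapse_rate * synapse_count    # ~1e16 events/s

  transistor_count = 1e10  # >10^10 transistors on one chip
  clock_rate = 1e9         # >10^9 Hz
  chip_events = transistor_count * clock_rate    # ~1e19 transitions/s

  print(chip_events / brain_events)  # ~1000.0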

The capabilities of the brain lie in how it's all wired up, and that's exactly what you don't want if you're trying to co-opt it to do something else. The brain has giant chunks devoted to extremely specialized purposes: https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File...

How do you turn that into a workhorse? It would be incredibly difficult. It's like looking at a factory floor and saying: oh, look at all that power, let's turn it into a racecar! You can't just grab a ton of unrelated systems and expect them to work together on a task for you.

  • You're making the implicit assumption that synapses === binary bits, and that synapses are the only thing important to the brain's computation. I would be surprised if either of those things were the case.

  • I don't think a bit transition is in any way comparable to the "event transmission" to a potentially extremely large number of other interconnected neurons.

    An actor-based system would be a better model, and I'm not sure we have anything like that in hardware (a rough sketch of the idea follows at the end of this comment). I do agree that sometime in the future it will be possible to overcome the biological limit, since cells are most definitely not at an optimum (probably not even at a local one) -- think duplicated pathways and the like -- but it is in no way trivial.

    John von Neumann wrote a great paper with his thoughts on the topic. It is a really great read; even though technological and biological advances may have made parts of it outdated, I think he saw a few things clearly into the future.
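
    For concreteness, a minimal sketch of what an actor-based neuron model could look like (all names invented; concurrency, scheduling, and message timing are omitted):

      from collections import deque

      class NeuronActor:
          def __init__(self, threshold=1.0):
              self.mailbox = deque()   # incoming "event transmissions"
              self.potential = 0.0
              self.threshold = threshold
              self.targets = []        # (downstream actor, weight) pairs

          def deliver(self, weight):
              self.mailbox.append(weight)

          def process(self):
              # Drain the mailbox, integrate, and fan out if we fire:
              # one spike becomes a message to every downstream actor,
              # not a clocked bit flip.
              while self.mailbox:
                  self.potential += self.mailbox.popleft()
              if self.potential >= self.threshold:
                  self.potential = 0.0
                  for actor, weight in self.targets:
                      actor.deliver(weight)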