Comment by pjs_
16 hours ago
Be careful about how you interpret that paper. It looks really impressive -- real neurons in a petri dish seem to successfully (if amateurishly) murk a few imps.
https://www.youtube.com/watch?v=yRV8fSw6HaE
But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo:
https://github.com/SeanCole02/doom-neuron
So there is an entire pytorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning" but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also.
Are the neurons learning to play Doom, or are they learning to inject ever so slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-Markovian sludge? The authors do ablation experiments to try to get to the bottom of this, but I can't really tell how compelling the results are (due to my own ignorance/stupidity, of course).
All opinions are my own:
The whole point of the CNNs is to act as an autoencoder on the input side and a decoder on the output side. The only reason this is done in the first place is that the number of electrodes in the dish is pitiful and has no chance of describing something as complex as Doom. They are there to create a latent space that can be fed through 60-odd electrodes, and to decode the neuronal latent space into button presses.
The pong version of the game was the proof of concept that neurons can learn without a latent space intermediate in either direction. Both the world state and neuronal control were raw signals: https://pubmed.ncbi.nlm.nih.gov/36228614/
What I wanted to do after dish brain pong, but never had the budget for, was using live animals as the computational substrate: use the visual cortex of one as the input, send the neural spikes to a second animal's frontal lobe for computation, and finally send those signals to a third animal's motor cortex to physically press buttons. It's a shame we never raised enough, because it wouldn't have cost more than $15m to build the hardware and do the biological proof of concept.
> using live animals as the computational substrate: use the visual cortex of one as the input, send the neural spikes to a second animal's frontal lobe for computation, and finally send those signals to a third animal's motor cortex to physically press buttons.
That sounds terrifying.
It does, but most of what we do to animals is terrifying. I can see why getting funding for this idea might not have been easy, though. "I want to mind-control three animals to play Doom" is certainly a pitch.
> The only reason why this is done in the first place is because the number of electrodes in the dish is pitiful and has no chance of describing something as complex as Doom.
This sounds a bit suspicious, though. If we're confident that the neurons aren't complex enough to understand Doom, how can they be said to be complex enough to play it? "Playing a game" is a loose term, but it seems difficult to say something is playing a game it can't comprehend or interact with. By analogy, if there were a CNN between me and a game of Doom, people would say "roenxi is cheating with an AI aim-bot", not "roenxi is playing Doom".
The whole thing is still pretty cool though. Hopefully the neurons are having fun, I'm sure we all wish them what happiness they can muster.
There aren't enough input electrodes to encode a Doom frame into the multi-electrode array without compression.
That's all the artificial neural networks are doing.
If we could have gotten an MEA with 320x200 electrodes we wouldn't have used any encoding and just let the neurons figure it out. Instead it is an 8x8 grid.
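To put numbers on that bottleneck, here is a minimal sketch (my own, not from the linked repo) of squeezing a 320x200 grayscale frame down to an 8x8 electrode grid by naive average-pooling. The real demo uses a learned convnet encoder; this just shows the scale of the compression: 64,000 pixels into 64 stimulation channels.

```python
import numpy as np

# Stand-in for one 320x200 grayscale Doom frame (values in [0, 1]).
frame = np.random.rand(200, 320)

# Average-pool into an 8x8 grid: each electrode "sees" a 25x40 patch.
pooled = frame.reshape(8, 25, 8, 40).mean(axis=(1, 3))

print(frame.size, "->", pooled.size)  # 64000 -> 64 channels
```

A learned encoder can of course pack far more task-relevant information into those 64 channels than blind pooling, which is exactly why it is hard to tell where the "learning" lives.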
This sounds nightmarish. Maybe we build a human centipede if we can get the VC funding next?
Or a Torment Nexus!!
I would have been quite happy to use my own brain as the computational substrate and I had more than a few other people keen to be the input and output parts of the system.
It's rather unfortunate that in the West it is impossible to get elective brain surgery. The countries that will do it have at best a spotty record. I talked to someone who had it done in Brazil and their electrodes became dislodged after a few months.
There is nothing new or horrifying about self experimentation. Newton for one did it in conditions that were far more dangerous: https://psmag.com/social-justice/newtons-needle-scientific-s...
Reminds me of the head transplant experiments. The stuff of nightmares but also fascinating.
Gosh, it's been years, but I think they did the dual-animal experiment with rats about a decade ago. I'm likely misremembering, but they tickled a rat in Japan, fed the impulses over the internet, and had another rat in maybe Brazil move its tail in response. From what I recall it did potentiate over time, implying learning at the reflex level. Sorry I can't find the link, though!
Hahaha I love how you made something that wouldn’t be harmful sound like a nightmare horror show.
Edit: sweet Jesus, never mind, I misread it.
Yes... quite a shame that we never made an amalgamation cyborg horror out of parts and pieces of several different animals. That's definitely not the plot of every sci-fi horror movie.
>What I wanted to do after dish brain pong, but never had the budget for, was using live animals as the computational substrate.
What does the ethical due diligence process look like, for something like this?
Haha, you made me laugh quite a bit, as if ethical due diligence was even a blip in the mental model of someone who talks like that about sentient life forms.
Someone should try to replace the neurons with /dev/urandom and see if the chip can still play Doom, in the spirit of the qday prize winner.
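That ablation is cheap to sketch. Here is a hypothetical harness (names like `decode_action` and `noise_substrate` are mine, not from the repo): replace the dish's spike responses with pure noise and check whether the downstream decoder still emits plausible-looking actions.

```python
import random

ACTIONS = ["forward", "left", "right", "shoot"]

def decode_action(channels):
    # Dummy stand-in for the convnet decoder: pick an action from
    # whichever channel fired hardest.
    return ACTIONS[channels.index(max(channels)) % len(ACTIONS)]

def noise_substrate(stimulus, n_channels=64):
    # Ignores the stimulus entirely -- a "non-Markovian sludge"
    # stand-in for the dish of neurons.
    return [random.random() for _ in range(n_channels)]

stimulus = [0.0] * 64  # an 8x8 stimulation pattern, flattened
action = decode_action(noise_substrate(stimulus))
print(action)  # an action chosen from noise alone
```

If game scores with the noise substrate match the dish, the learning is happening in the surrounding network; if they collapse, the neurons are contributing something real.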
Reminds me of the ship of theseus philosophical experiment where they replace neurons by logic gates one by one and ask when exactly consciousness stops existing.
I don't think it's clear that logic gates can ever replace neurons in the first place.
This reminds me of https://news.ycombinator.com/item?id=47897647, where a quantum computing demo worked equally well if you replaced the QC with an entropy source.
> but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet also.
Yeah it feels like they constructed the conclusion and worked backwards from there. I'm not seeing how their claim has much merit.