Comment by dale_glass
5 years ago
It's interesting, but strikes me as very unrealistic. I don't think it'd go that way. In fact, it'd be far more horrifying.
We wouldn't bother trying to convince an image of a brain to cooperate, because we'd very quickly lose any need to do that.
One of the very first things we'd do with a simulated brain is to debug it. Execute it step by step, take lots of measurements of all parameters, save/reload state, test every possible input and variation. And I'm sure it wouldn't take long to start getting interesting results, superficial at first, then deeper and deeper.
Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.
And that's when the truly freaky stuff starts. Using such a tool we could figure out many things about a brain's inner workings. How do we truly respond to advertising? How to produce maximum anger and maximum cooperation? How to best implant false memories? How to craft a convincing lie? What are the bugs and flaws in human perception? We could fuzz it and see if we can crash a brain.
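To make that concrete, here's a toy sketch in Python. The "brain" here is just a small random recurrent network (nothing like a real brain image; every detail is invented), but the fuzz loop itself, restore a snapshot, inject a random stimulus, watch for pathological activity, would have the same shape:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200
    W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # toy "connectome"
    snapshot = rng.normal(0, 1, N)               # saved brain state

    def run(state, stimulus, steps=500):
        # Advance the toy dynamics with the stimulus applied each step.
        for _ in range(steps):
            state = np.tanh(W @ state + stimulus)
        return state

    crashes = []
    for trial in range(1000):
        stimulus = rng.uniform(-3, 3, N)         # fuzzer: random inputs
        final = run(snapshot.copy(), stimulus)   # always restart from the snapshot
        if np.abs(final).mean() > 0.99:          # "crash": seizure-like saturation
            crashes.append(stimulus)

    print(f"{len(crashes)} of 1000 random stimuli saturated the toy network")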
We've already made some uncomfortable advancements, e.g. in how free-to-play games intentionally try to create addiction. With such a tool at our disposal we could fine-tune strategies without having to guess. Eventually we'd know which bits of the brain we want to target, and would only have to find ways of getting the right things to percolate down the neural network until those bits are affected in the ways we want.
Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.
This seems rather optimistic to me. There are days when I count myself lucky to be able to debug my own code. And that's maybe seven orders of magnitude less complex. And has comments. And unit tests.
I'd be willing to bet that once we've achieved the ability to scan and simulate brains at high fidelity, we'll still be far, far, far away from understanding how their spaghetti code creates emergent behaviour. We'll have created a hyper-detailed index of our incomprehension. Even augmented by AI debuggers, comprehension will take a long, long time.
Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain in a jar), so YMMV.
> Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain in a jar), so YMMV.
How can you be so sure of that?
Because he stated that he's not a mad scientist with a jarred brain in his possession (to his knowledge/current memory state), not that his own brain isn't in a jar, which, while possible, is most unlikely.
Yes, I'm fun at parties.
This depends on the availability of a debug/test/research environment for brain images.
There are ~20M software developers on this planet. If 100k of them had daily access to a dev environment for brain images, things would progress extremely fast.
Well, training a neural network is not significantly different from how you train a brain: you don't need to understand its internals as long as it produces the right outputs for your inputs.
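In that spirit, a minimal black-box training sketch in Python (using scikit-learn; the sine target is an arbitrary stand-in): we only ever supply input/output pairs and never look at, or understand, the learned weights.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # We specify *what* outputs we want, never *how* to compute them,
    # and we never inspect the learned internals.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (2000, 1))
    y = np.sin(X).ravel()  # the "right outputs" for our training inputs

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    net.fit(X, y)

    # Close to sin(1.0), yet the weights explain nothing to us.
    print(net.predict([[1.0]]), np.sin(1.0))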
> Execute it step by step, take lots of measures of all parameters, save/reload state, test every possible input and variation.
This assumes that simulation can be done faster than real time. I think it will be the other way around: the brain is the fastest hardware implementation and our simulations will be much slower, like https://en.wikipedia.org/wiki/SoftPC
It also assumes the simulation will be numerically stable, and not quickly unstable like weather simulations. We still can't make reliable weather forecasts more than 7 days ahead in areas like Northern Europe.
The brain is pretty much guaranteed to be inefficient. It needs living tissue, for one, and we can completely dispense with anything that's not actually involved in computation.
Just like we can make a walking robot without being the least concerned about the details of how bones grow and are maintained -- on the scales needed for walking, a bone is a static chunk of material that can be abstracted away without loss.
C. elegans is a small nematode composed of 959 somatic cells, 302 of which are neurons, where the location, connectivity, and developmental origin/fate of every cell is known.
We still can't simulate it.
Part of the problem is that the physical diffusion of chemicals (e.g., neuromodulators) may matter and this is 'dispensed with' in most connectivity-based models.
Neurons rarely produce identical responses to the same stimuli, and their past history (on scales of milliseconds to days) accounts for much of this variability. In larger brains, the electric fields produced by activity in a bundle of nerve fibers may "ephaptically couple" nearby neurons...without actually making contact with them[0].
In short, we have no idea what can be thrown out.
[0] This sounds crazy but data from several labs--including mine--suggests it's probably happening.
> anything that's not actually involved in computation.
This doesn't seem like a very easy problem to solve.
It's the fastest we currently have but pretty unlikely to be the fastest allowed by the laws of physics. Evolution isn't quite that perfect - e.g. the fastest flying animals are nowhere near the top flying speed that can be achieved. Why would the smartest animal be at the very limit of what's possible in terms of speed of thinking or anything else?
In the context of the story we're responding to, it does mention that they can be simulated at speeds of at least 100x real time at the time of writing.
Human synapses top out at <100 Hz and the human brain has <10^14 of them. Single silicon chips are >10^10 transistors, operating at >10^9 Hz. Naively, a high end GPU is capable of more state transitions than the human brain by a factor of 1000. That figure for the brain also includes memory; the GPU doesn't. The human brain runs on impressively little power and is basically self-manufacturing, but it's WAY less compact or intricate than a $2000 processor.
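Spelling out the arithmetic with the same rough figures (a back-of-the-envelope sketch, not a measurement):

    # Rough upper bounds from the comment above.
    synapses = 1e14        # human synapse count
    synapse_rate = 1e2     # Hz, synaptic update rate
    transistors = 1e10     # transistors on one high-end chip
    clock = 1e9            # Hz, chip clock rate

    brain_ops = synapses * synapse_rate   # ~1e16 state transitions / second
    chip_ops = transistors * clock        # ~1e19 state transitions / second
    print(chip_ops / brain_ops)           # ~1000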
The capabilities of the brain are in how it's all wired up. That's exactly what you don't want if you're trying to coopt it to do something else. The brain has giant chunks devoted to extremely specialized purposes: https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File...
How do you turn that into a workhorse? It would be incredibly difficult. It's like looking at a factory floor and saying: oh, look at all that power, let's turn it into a racecar! You can't just grab a ton of unrelated systems and expect them to work together on a task for you.
You're making the implicit assumption that synapses === binary bits, and that synapses are the only thing important to the brain's computation. I would be surprised if either of those things were the case.
I don’t think a bit transition is in any way comparable to the “event transmission” to a potentially extremely large number of interconnected other neurons.
An actor-based system would be a better model, and I'm not sure we have something like that in hardware. I do agree that sometime in the future it will be possible to overcome the biological limit, as cells are most definitely not at an optimum (probably not even at a local one), with duplicated pathways and the like, but it is in no way trivial.
John von Neumann wrote a great paper on the topic, or at least his thoughts about it. It's a really great read; even though both technological and biological advances may make it outdated, I think he saw a few things clearly into the future.
Your comment reminded me of a clever and well-written short story called "Understand" by Ted Chiang.
> We could fuzz it and see if we can crash a brain.
Sadly, this we already know. Torture, fear, depression, regret; we have a wide selection to choose from if we want to "crash a brain".
I don't mean it quite like that.
Think for instance of a song that got stuck in your head. It probably hits some parts of it just right. What if we could fine-tune that? What if we take a brain simulator and a synthesizer, and write a genetic algorithm (GA) that keeps on trying to create a sound that hits some maximum?
It's possible that we could make something that would get it stuck in your head, or tune it until it's almost a drug in musical form.
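A toy version of that loop, sketched in Python. The catchiness function is pure invention standing in for "play the sound to a simulated brain and measure how hard some target circuit lights up"; the point is only the shape of the search:

    import random

    rng = random.Random(0)

    def catchiness(params):
        # Stand-in fitness: distance to an arbitrary hidden optimum.
        # In the scenario above, this would be a measured brain response.
        target = [0.7, -0.2, 0.5, 0.1]
        return -sum((p - t) ** 2 for p, t in zip(params, target))

    def mutate(params, sigma=0.1):
        return [p + rng.gauss(0, sigma) for p in params]

    # Simple evolutionary loop over (hypothetical) synthesizer parameters:
    # keep the top scorers, refill the population with mutated copies.
    population = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
    for generation in range(200):
        population.sort(key=catchiness, reverse=True)
        parents = population[:5]
        population = parents + [mutate(rng.choice(parents)) for _ in range(15)]

    print(max(population, key=catchiness))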
What you're talking about is getting pretty close to a Basilisk - https://en.wikipedia.org/wiki/David_Langford#Basilisks
I've no experience with it, but I imagine it's like heroin or DMT or something like that. Wouldn't that come close to something that "hits some maximum"?
Brains still operate as brains after severe trauma. They just don't necessarily operate well as humans in a society. Though I guess you could say making a brain destroy itself (suicide) is "crashing it" too.
> Brains still operate as brains after severe trauma
Well, except when they don't. And since a brain functioning as a brain is part of the operating requirements for the body that lets the brain operate at all, when they don't, they ultimately fail entirely in short order.
So, assuming that a brain generally operates as a brain after severe trauma is a pretty serious case of survivorship bias.
My first thought was that this reminded me of an epileptic seizure brought on by "fuzzing" (sensory overload)
I think that's pretty plausible.
Ted Chiang’s “The Lifecycle of Software Objects” is also similar to the OP. It’s basically about how an AI (not strictly an upload) would probably be subjected to all sorts of horrible shit if it were widely available.
From the title "Lena" and the reference to compression algorithms made with MMAcevedo, it's clear that the story is drawing parallels to image processing. In that field, the ability to store images came decades before realistic 3D rendering, Photoshop, or even computer vision. For example, the sprites from some early video games look like they were modeled in 3D, but were actually images based on photographs of clay models. I think (with suspension of disbelief that simulating consciousness is possible) it is realistic that being able to capture consciousness would come before being able to understand and manipulate it.
It sounds like, in this world, a lot of the value of a simulated brain is in the as-yet-indescribable complexity of human cognition. If you debug a brain to remove the parts of it that are uncooperative, you likely have to remove the parts of it that have opinions of any sort about the task on which it's working, which seems like it would defeat the value of using a brain at all. If you're giving a task to a simulated brain, it's because it's beyond the reach of what you can efficiently ask a program to do, and you want the subconscious reactions, development of instinct, and deep unplanned reasoning that you get out of asking an educated and experienced human to think about a task. You can likely tweak a simulated brain into cooperation, sure, but you'd have very few guarantees of not breaking those mechanisms while you're at it.
If you can describe the task to be performed well enough that you don't need the je-ne-sais-quoi of a human brain to perform it, you may as well just have a regular computer program do it. (We already have very efficient systems that involve extracting limited amounts of creativity and insight from human brains and forming them into repeatable tasks that can be run on computers - that's what the entire software industry is about.)
Simulations and models are not real. Maybe some "attacks" could be developed against a simulated mind, but are they due to the mind itself or the underlying infrastructure? Just because you can simulate a warp drive in software doesn't mean you can build an FTL ship.
In the case of a warp drive we care about a physical result (FTL travel), not a computational result.
We already have emulators and virtual machines for lots of old hardware and software. If I play a Super Nintendo game on my laptop, it's accurately emulating an SNES. The software doesn't care that the original hardware is long gone. The computational result is the same (or close enough to not matter for my purposes). If brain emulations are possible, then running old snapshots in deceptive virtual environments is possible. That would allow for all of the "attacks" described in this piece of fiction.
Emulator developers (game console and otherwise) have faced many bugs because of undocumented or emergent properties of the original hardware. Some games required those properties to function.
The way I understand the story is that you have a scan of the relevant physical structure of the brain, plus the knowledge of how to simulate every component precisely enough. You may not know how different parts interact with each other, but that doesn't prevent correct functioning.
Just like you can have somebody assemble a complex device by just putting together pieces and following instructions. You could for instance assemble a working analog TV without understanding how it works. It's enough to have the required parts, and a wiring plan. Once you have a working device then you can poke at it and try and figure out what different parts of it do.
"Execute it step by step,"
These are not imperative programs or well-organized data. They are NNs; we can't fathom how to debug them just yet.
Also, they should tack 100 years onto the timeline; I don't think we'll truly be making useful images any time soon.
> Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.
What starts out as mere science will easily be repurposed by its financial backers to do this in real time to non-consenting subjects in Guantanamo Bay and then in your local area.
I think it’s possible that we’ll be able to run large simulations on models whose mechanics we can’t really understand very well. It’s not a given we’ll be able to step through a sequence of states. Even more so if it involves quantum computation.
Many of the things you describe could still happen with Monte Carlo-type methods, providing statistical understanding but not full reverse engineering.
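For example, something like this, where blackbox stands in for an opaque simulation we can run but not step through: random sampling gives a response distribution and sensitivities without any reverse engineering.

    import numpy as np

    rng = np.random.default_rng(0)

    def blackbox(x):
        # Stand-in for an opaque simulation: runnable, not inspectable.
        return np.tanh(3 * x[0] - x[1] ** 2) + rng.normal(0, 0.05)

    samples = rng.uniform(-1, 1, (10_000, 2))
    responses = np.array([blackbox(x) for x in samples])

    # Statistical picture, no reverse engineering: outcome distribution
    # and which input dimension the output actually depends on.
    print(responses.mean(), responses.std())
    print(np.corrcoef(samples[:, 0], responses)[0, 1])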
> how to best create discord, or how to best destroy a human being by just talking to them.
In some cases therapists do this already. Techniques have intended effects which may differ from actual effects. The dead never get to understand or explain what went wrong.
> Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.
It seems like we’re close to that already.
They teach that stuff at the School of the Americas in Fort Benning, Georgia. (Now called WHINSEC, to try to get away from its past.)
Sounds like trained networks to efficiently manipulate uploaded brains would be a thing in your scenario.
You could use ML to reduce the manual 'debug' overhead. Spooky stuff.