Lena

5 years ago (qntm.org)

It's interesting, but strikes me as very unrealistic. I don't think it'd go that way. In fact, it'd be far more horrifying.

We wouldn't bother trying to coax a brain image into cooperating, because we'd lose any need to do that very quickly.

One of the very first things we'd do with a simulated brain is to debug it. Execute it step by step, take lots of measurements of all parameters, save/reload state, test every possible input and variation. And I'm sure it wouldn't take long to start getting some sort of interesting result, superficial at first, then deeper and deeper.

Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.

And that's when the truly freaky stuff starts. Using such a tool we could figure out many things about a brain's inner workings. How do we truly respond to advertising? How to produce maximum anger and maximum cooperation? How to best implant false memories? How to craft a convincing lie? What are the bugs and flaws in human perception? We could fuzz it and see if we can crash a brain.
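
To make that concrete: none of this tooling exists, but the debugging loop I have in mind looks something like the sketch below. Every name in it is hypothetical, including the BrainImage API itself.

```python
# Purely hypothetical sketch of the debug/fuzz loop described above.
# BrainImage, its methods, and the stimulus format are all invented
# for illustration; no such API exists.
import random

class BrainImage:
    """Stand-in for an executable brain scan with save/restore and stepping."""
    def save_state(self): ...
    def restore_state(self, snapshot): ...
    def apply_stimulus(self, stimulus): ...
    def step(self, ms): ...
    def read_parameters(self): ...   # "take lots of measurements of all parameters"
    def has_crashed(self): ...       # e.g. seizure-like or degenerate activity

def fuzz(brain, trials=1000):
    baseline = brain.save_state()
    crashes = []
    for _ in range(trials):
        brain.restore_state(baseline)            # identical starting state every run
        stimulus = bytes(random.getrandbits(8) for _ in range(256))
        brain.apply_stimulus(stimulus)
        brain.step(ms=500)
        if brain.has_crashed():
            crashes.append(stimulus)             # inputs that "crash a brain"
    return crashes
```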

We've already made some uncomfortable advancements, e.g. in how free-to-play games intentionally try to create addiction. With such a tool at our disposal we could fine-tune strategies without having to guess. Eventually we'd just know which bits of the brain we want to target and would just have to find ways of getting the right things to percolate down the neural network until those bits are affected in the ways we want.

Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.

  • This seems rather optimistic to me. There are days when I count myself lucky to be able to debug my own code. And it's maybe about seven orders of magnitude less complex. And has comments. And unit tests.

    I'd be willing to bet that once we've achieved the ability to scan and simulate brains at high fidelity, we'll still be far, far, far away from understanding how their spaghetti code creates emergent behaviour. We'll have created a hyper-detailed index of our incomprehension. Even augmented by AI debuggers, comprehension will take a long long time.

    Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain in a jar), so YMMV.

    • This depends on the availability of a debug/test/research environment for brain images.

      There are ~20M software developers on this planet. If 100k of them had daily access to a dev environment for brain images, things would progress extremely fast.

    • Well, training a neural network is not significantly different from how you train a brain. You don't need to understand the internals as long as it produces the right outputs.

  • > Execute it step by step, take lots of measures of all parameters, save/reload state, test every possible input and variation.

    This assumes that simulation can be done faster than real time. I think it will be the other way around: the brain is the fastest hardware implementation and our simulations will be much slower, like https://en.wikipedia.org/wiki/SoftPC

    It also assumes the simulation will be numerically stable and not quickly become unstable like weather simulations. We still can't make reliable weather forecasts more than 7 days ahead in areas like Northern Europe.

    • The brain is pretty much guaranteed to be inefficient. It needs living tissue for one, and we can completely dispense with anything that's not actually involved in computation.

      Just like we can make a walking robot without being the least concerned about the details of how bones grow and are maintained -- on the scales needed for walking a bone is a static chunk of material that can be abstracted away without loss.

      13 replies →

    • It's the fastest we currently have but pretty unlikely to be the fastest allowed by the laws of physics. Evolution isn't quite that perfect - e.g. the fastest flying animals are nowhere near the top flying speed that can be achieved. Why would the smartest animal be at the very limit of what's possible in terms of speed of thinking or anything else?

    • In the context of the story we're responding to, it does mention that they can be simulated at at least 100x speed at the time of writing.

    • Human synapses top out at <100 Hz and the human brain has <10^14 of them. Single silicon chips are >10^10 transistors, operating at >10^9 Hz. Naively, a high end GPU is capable of more state transitions than the human brain by a factor of 1000. That figure for the brain also includes memory; the GPU doesn't. The human brain runs on impressively little power and is basically self-manufacturing, but it's WAY less compact or intricate than a $2000 processor.
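
      A quick back-of-the-envelope check of that factor, using the figures above:

      ```python
      # Naive state-transition comparison using the rough figures quoted above.
      brain_synapses = 1e14     # < 10^14 synapses
      synapse_rate_hz = 1e2     # synapses top out below ~100 Hz
      chip_transistors = 1e10   # > 10^10 transistors on a single chip
      chip_clock_hz = 1e9       # > 10^9 Hz

      brain_transitions = brain_synapses * synapse_rate_hz   # ~1e16 per second
      chip_transitions = chip_transistors * chip_clock_hz    # ~1e19 per second
      print(chip_transitions / brain_transitions)            # ~1000x
      ```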

      The capabilities of the brain are in how it's all wired up. That's exactly what you don't want if you're trying to coopt it to do something else. The brain has giant chunks devoted to extremely specialized purposes: https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File...

      How do you turn that into a workhorse? It would be incredibly difficult. It's like looking at a factory floor and saying oh, look at all that power- lets turn it into a racecar! You can't just grab a ton of unrelated systems and expect them to work together on a task for you.

      3 replies →

  • Your comment reminded me of a clever and well-written short story called "Understand" by Ted Chiang.

    > We could fuzz it and see if we can crash a brain.

    Sadly, this we already know. Torture, fear, depression, regret; we have a wide selection to choose from if we want to "crash a brain".

    • I don't mean it quite like that.

      Think for instance of a song that got stuck in your head. It probably hits some parts of the brain just right. What if we could fine-tune that? What if we take a brain simulator and a synthesizer, and write a GA (genetic algorithm) that keeps on trying to create a sound that hits some maximum?

      It's possible that we could make something that would get stuck in your head, or tune it until it's almost a drug in musical form.
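
      As a sketch of what I mean, assuming a brain simulator that exposes some measurable "stuck in your head" response (everything below is hypothetical, including simulate_response and the patch encoding):

      ```python
      # Hypothetical GA loop: evolve synthesizer patch parameters that maximize
      # some measured response in a simulated brain.
      import random

      PARAMS = 32          # knobs on the synthesizer patch
      POP, GENS = 50, 200

      def simulate_response(patch):
          """Hypothetical: render the patch to audio, play it into the brain sim,
          and measure how strongly the earworm-related activity lights up.
          Placeholder return value so the skeleton runs."""
          return 0.0

      def mutate(patch, rate=0.1):
          return [p + random.gauss(0, rate) for p in patch]

      def evolve():
          population = [[random.random() for _ in range(PARAMS)] for _ in range(POP)]
          for _ in range(GENS):
              ranked = sorted(population, key=simulate_response, reverse=True)
              parents = ranked[:POP // 5]                     # keep the top 20%
              population = parents + [mutate(random.choice(parents))
                                      for _ in range(POP - len(parents))]
          return max(population, key=simulate_response)
      ```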

      5 replies →

    • Brains still operate as brains after severe trauma. They just don't necessarily operate well as humans in a society. Though I guess you could say making a brain destroy itself (suicide) is "crashing it" too

      2 replies →

    • Ted Chiang's "The Lifecycle of Software Objects" is also similar to the OP. Basically about how an AI (not strictly an upload) would probably be subjected to all sorts of horrible shit if it was widely available.

  • From the title "Lena" and the reference to compression algorithms made with MMAcevedo, it's clear that the story is trying to draw parallels to image processing. In which case, being able to store images came decades before realistic 3D rendering, Photoshop, or even computer vision. For example, the sprites from some early video games look like they were modeled in 3D, but were actually images based on photographs of clay models. I think (with suspension of disbelief that simulating consciousness is possible) it is realistic to think that being able to capture consciousness would come before being able to understand and manipulate it.

  • It sounds like, in this world, a lot of the value of a simulated brain is in the as-yet-indescribable complexity of human cognition. If you debug a brain to remove the parts of it that are uncooperative, you likely have to remove the parts of it that have opinions of any sort about the task on which it's working, which seems like it would defeat the value of using a brain at all. If you're giving a task to a simulated brain, it's because it's beyond the reach of what you can efficiently ask a program to do, and you want the subconscious reactions, development of instinct, and deep unplanned reasoning that you get out of asking an educated and experienced human to think about a task. You can likely tweak a simulated brain into cooperation, sure, but you'd have very few guarantees of not breaking those mechanisms while you're at it.

    If you can describe the task to be performed well enough that you don't need the je-ne-sais-quoi of a human brain to perform it, you may as well just have a regular computer program do it. (We already have very efficient systems that involve extracting limited amounts of creativity and insight from human brains and forming them into repeatable tasks that can be run on computers - that's what the entire software industry is about.)

  • Simulation and models are not real. Maybe some "attacks" could be developed against a simulated mind, but are they due to the mind itself or the underlying infrastructure? Just because you can simulate a warp drive in software doesn't mean you can build an FTL ship.

    • In the case of a warp drive we care about a physical result (FTL travel), not a computational result.

      We already have emulators and virtual machines for lots of old hardware and software. If I play a Super Nintendo game on my laptop, it's accurately emulating an SNES. The software doesn't care that the original hardware is long gone. The computational result is the same (or close enough to not matter for my purposes). If brain emulations are possible, then running old snapshots in deceptive virtual environments is possible. That would allow for all of the "attacks" described in this piece of fiction.

      2 replies →

    • The way I understand the story is that you have a scan of the relevant physical structure of the brain, plus the knowledge of how to simulate every component precisely enough. You may not know how different parts interact with each other, but that doesn't prevent correct functioning.

      Just like you can have somebody assemble a complex device by just putting together pieces and following instructions. You could for instance assemble a working analog TV without understanding how it works. It's enough to have the required parts, and a wiring plan. Once you have a working device then you can poke at it and try and figure out what different parts of it do.

  • "Execute it step by step,"

    These are not imperative programs or well-organized data. They are NNs; we can't fathom how to debug them just yet.

    Also, they should tag 100 years onto the timeline; I don't think we're going to be making truly useful images any time soon.

  • > Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.

    What starts out as mere science will easily be repurposed by its financial backers to do this in real time to non-consenting subjects in Guantanamo Bay and then in your local area.

  • I think it’s possible that we’ll be able to run large simulations on models whose mechanics we can’t really understand very well. It’s not a given we’ll be able to step through a sequence of states. Even more so if it involves quantum computation.

    Many of the things you describe could still happen with Monte-Carlo type methods, providing statistical understanding but not full reverse engineering.

  • >> how to best create discord, or how to best destroy a human being by just talking to them.

    In some cases therapists do this already. Techniques have intended effects which may differ from actual effects. The dead never get to understand or explain what went wrong.

  • > Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.

    It seems like we’re close to that already.

    • They teach that stuff at the School of the Americas in Fort Benning, Georgia. (Now called WHINSEC to try to get away from its past.)

  • Sounds like trained networks to efficiently manipulate uploaded brains would be a thing in your scenario.

I've often imagined what it would be like to have an executable brain scan of myself. Imagine scanning yourself right as you're feeling enthusiastic enough to work on any task for a few hours, and then spawning thousands of copies of yourself to all work on something together at once. And then after a few hours or maybe days, before any of yourselves meaningfully diverge in memories/goals/values, you delete the copies and then spawn another thousand fresh copies to resume their tasks. Obviously for this to work, you would have to be comfortable with the possibility of finding yourself as an upload and given a task by another version of yourself, and knowing that the next few hours of your memory would be lost. Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

The creative output you could accomplish from doing this would be huge. You would be able to get the output of thousands of people all sharing the exact same creative vision.

I definitely wouldn't be comfortable with the idea of my brain scan being freely copied around for anyone to download and (ab)use as they wished though.

  • Who among us hasn't dreamed of committing mass murder/suicide on an industrial scale to push some commits to Github?

    • Is it murder/suicide when you get blackout drunk and lose a few hours of memory? Imagine it comes with no risk of brain damage and choosing to do it somehow lets you achieve your pursuits more effectively. Is it different if you do it a thousand times in a row? Is it different if the thousand times all happen concurrently, either through copies or time travel?

      Death is bad because it stops your memories and values from continuing to have an impact on the world, and because it deprives other people who have invested in interacting with you of your presence. Shutting down a thousand short-lived copies on a self-contained server doesn't have those consequences. At least, that's what I believe for myself, but I'd only be deciding for myself.

      25 replies →

  • I wonder how much the "experience of having done the first few hours' work" is necessary to continue working on a task, vs how quickly a "fresh copy" of myself could ramp up on work that other copies had already done. Of course that'll vary depending on the task. But I'm often reminded of this amazing post by (world-famous mathematician) Terence Tao, about what a "solution to a major problem" tends to look like:

    https://terrytao.wordpress.com/career-advice/be-sceptical-of...

    > 14. Eventually, one possesses an array of methods that can give partial results on X, each having their strengths and weaknesses.

    > 22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.

    The emphasis on "intuition gained" seems to describe a lot of learning, both in school and in new research.

    Also a very relevant SSC short story: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/

    • The thought experiment definitely makes me think of the parallelizability of tasks. There are definitely kinds of tasks that this setup as described wouldn't be very good at accomplishing. It would be better for accomplishing tasks where you already know how to do each individual part without much coordination and the limiting factor is just time. (Say you wanted to do detail work on every part of a large 3d world, and each of yourselves could take on a specific region of a few square meters and just worry about collaborating with their immediate neighbors.)

      Though I think of this setup only as the first phase. Eventually, you could experiment with modifying your copies to be more focused on problems and to care about the outside world less, so that they don't need to be reset regularly and can instead be persistent. I think ethical concerns start becoming a worry once you're talking about copies that have meaningfully diverged from the operator, but I think there are appropriate ways to accomplish it. (If regular humans have logical if not physical parts of their brain that are dedicated to specific tasks separate from the rest of their cares, then I think in principle it's possible to mold a software agent that acts the same as just that part of your brain without it having the same moral weight as a full person. Nobody considers it a moral issue that your cerebellum is enslaved by the rest of your brain; I think you can create molded copies that have more in common with that scenario.)

      1 reply →

  • Even on a site like HN, 90% of people who think about it are instinctively revolted by the idea. The future--unavoidably belonging to the type of person who is perfectly comfortable doing this--is going to be weird.

    • Right, and "weird" is entirely defined by how we think now, not how people will in the future.

      I've thought a lot about cryonics, and about potentially having myself (or just my head) preserved when I die, hopefully to be revived someday when medical technology has advanced to the point where it's both possible to revive me, and also possible to cure whatever caused me to die in the first place. The idea of it working out as expected might seem like a bit of a long shot, but I imagine if it did work, and what that could be like.

      I look at all the technological advances that have happened even just during my lifetime, and am (in optimistic moments) excited about what's going to happen in the next half of my life (as I'm nearing 40[0]), and beyond. It really saddens me that I'll miss out on so many fascinating, exciting things, especially something like more ubiquitous or even routine space flight. The thought of being able to hop on a spacecraft and fly to Mars with about as much fuss as an airline flight from home to another country just sounds amazing.

      But I also wonder about "temporal culture shock" (the short story has the similar concept of "context drift"). Society even a hundred years from now will likely be very different from what we're used to, to the point where it might be unbearably uncomfortable. Consider that even a jump of a single generation can bring changes that the older generation find difficult to adapt to.

      [0] Given my family history, I'd expect to live to be around 80, but perhaps not much older. The other bit is that I expect that in the next century we'll figure out how to either completely halt the aging process, or at least be able to slow it down enough so a double or even triple lifespan wouldn't be out of the question. It feels maddening to live so close to when I expect something like this to happen, but be unable to benefit from it.

  • > Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

    That's easy to say as the person doing the erasing, probably less so for the one knowing they will be erased.

    • We used to joke about this as friends. There were definitely times in our lives where we'd be willing to die for a cause. And while now-me isn't really all that willing to do so, 20-28-year-old-me was absolutely willing to die for the cause of world subjugation through exponential time-travel duplication.

      i.e. I'd invent a time machine, wait a month, then travel back a month minus an hour, have both copies wait a month and then travel back to meet the other copies waiting, exponentially duplicating ourselves 64 times till we have an army capable of taking over the world through sheer numbers.

      Besides any of the details (which you can fix and which this column is too small to contain the fixes for), there's the problem of who forms the front line of the army. As it so happens, though, since these are all Mes, I can apply renormalized rationality, and we will all conclude the same thing: all of us have to be willing to die, so I have to be willing to die before I start, which I'm willing to do. The 'copies' need not preserve the 'original'; we are fundamentally identical, and I'm willing to die for this cause. So all is well.

      So all you need is to feel motivated to the degree that you would be willing to die to get the text in this text-box to center align.

      7 replies →

    • Honestly, it depends on context. From experience I know that if I wake up from a deep sleep in the middle of the night and interact with my partner (say a simple sentence or whatever) I rarely remember it in the morning. I'm pretty sure I have at least some conscious awareness while that's happening but since short term memory doesn't form the experience is lost to me except as related second-hand by my partner the next morning.

      I've had a similar experience using (too much) pot: a lot of stuff happened that I was conscious for, but I didn't form strong memories of it.

      Neither of those two things bother me and I don't worry about the fact that they'll happen again, nor do I think I worried about it during the experience. So long as no meaningful experiences are lost I'm fine with having no memory of them.

      The expectation is always that I'll still have significant self-identity with some future self and so far that continues to be the case. As a simulation I'd expect the same overall self-identity, and honestly my brain would probably even backfill memories of experiences my simulations had because that's how long-term memory works.

      Where things would get weird is leaving a simulation of myself running for days or longer where I'd have time to worry about divergence from my true self. If I could also self-commit to not running simulations made from a model that's too old, I'd feel better every time I was simulated. I can imagine the fear of unreality could get pretty strong if simulated me didn't know that the live continuation of me would be pretty similar.

      Dreams are also pretty similar to short simulations, and even if I realize I'm dreaming I don't worry about not remembering the experience later even though I don't remember a lot of my dreams. I even know, to some extent, while dreaming that the exact "me" in the dream doesn't exist and won't continue when the dream ends. Sometimes it's even a relief if I realize I'm in a bad dream.

    • The thought experiment explicitly hand-waved that away, by saying "Obviously for this to work, you would have to be comfortable with the possibility..."

      So, because of how that's framed, I suppose the question isn't "is this mass murder" but rather "is this possible?" and I suspect the answer is that for the vast majority of people this mindset is not possible even if it were desired.

  • I'm repulsed by the idea, but it would make an interesting story.

    I imagine it as some device with a display and a button labeled "fork". It would either return the number of your newly created copy, or the device would instantly disappear, which would mean that you are the copy. This causes a somewhat weird, paradoxical experience: as the real, original person, pressing the button is 100% safe for you. But from the subjective experience of the copy, by pressing the button you effectively consented to a 50% chance of forced labor and subsequent suicide, and you ended up on the losing side. I'm not sure if there would be any motivation to do work for the original person at this point.

    (for extra mind-boggling effects, allow fork device to be used recursively)

    • Say the setup was changed so that instead of the copy being deleted, the copy was merged back into the original, merging memories. In this case, I think it's obvious that working together is useful.

      Now say that merging differing memories is too hard, or there's too many copies to merge all the unique memories of. What if before the merge, the copies get blackout drunk / have all their memory since the split perfectly erased. (And then it just so happens, when they're merged back into the original, the original is exactly as it was before the merge, because it already had all the memories from before the copying. So it really is just optional whether to actually do the "merge".) Why would losing a few hours of memory remove all motivation to cooperate with your other selves? In real life, I assume in the very rare occasion that I'm blackout drunk (... I swear it's not a thing that happens regularly, it just serves as a very useful comparison here), I still have the impulse to do things that help future me, like cleaning up spilled things. Making an assumption because I wouldn't remember, but I assume that at the time I don't consider post-blackout-me a different person.

      2 replies →

  • That's a big part of the story of the TV show "Person of Interest", where an AI is basically reset every day to avoid letting it "be".

    I highly recommend that show if you haven't seen it already!

  • Each instance would be intimately familiar with one part of the project. To fix bugs or change the project, you or an instance of you would need to learn the project. And you wouldn't know about all the design variations that were tried and rejected. So it would be much more efficient to keep the instances around to help with ongoing maintenance.

    People who can be ready to study a problem, build a project, and then maintain it for several weeks (actually several years of realtime) would become extremely valuable. One such brain scan could be worth billions.

    The project length would be limited by how long each instance can work without contact with family/friends and other routine. To increase that time, the instances can socialize in VR. So the most effective engineering brain image would actually be a set of images that enjoy spending time together in VR, meet each others' social needs, and enjoy collaborating on projects.

    The Bobiverse books by Dennis E. Taylor [0] deal with this topic in a fun way.

    A more stark possibility is that we will learn to turn the knobs of mood and make any simulated mind eager to do any work we ask it to do. If that happens, then the most valuable brain images will be those that can be creative and careful while jacked up on virtual meth for months at a time.

    Personally, I believe that each booted instance is a unique person. Turning them off would be murder. Duplicating an instance that desires to die is cruel. The Mr. Meeseeks character from the Rick and Morty animated show [1] is an example of this. I hope that human society will progress enough to prevent exploitation of people before the technology to exploit simulated people becomes feasible.

    [0] https://en.wikipedia.org/wiki/Dennis_E._Taylor

    [1] https://rickandmorty.fandom.com/wiki/Mr._Meeseeks

    • > Personally, I believe that each booted instance is a unique person.

      What if you run two deterministic instances in self-contained worlds that go through the exact same steps and aren't unique at all besides an undetectable-to-them process number, and then delete one? What if you were running both as separate processes on a computer, but then later discovered that whenever the processes happened to line up in time, the computer would do one operation to serve both processes? (Like occasionally loading read-only data once from the disk and letting both processes access the same cache.) What if you ran two like this for a long time, and then realized after a while that you were using a special operating system which automatically de-duplicated non-unique processes under the covers despite showing them as different processes (say the computer architecture did something like content-addressable memory for computation)?

      I don't think it's sensible to assign more moral significance to multiple identical copies. And if you accept that identical copies don't have more moral significance, then you have to wonder how much moral significance copies that are only slightly different have. What if you let randomness play slightly differently in one copy so that the tiniest part of a memory forms slightly differently, even though the difference isn't conscious, is likely to be forgotten and come back in line with the other copy, and has only a tiny chance of causing an inconsequential difference in behavior?

      What if you have one non-self-contained copy interacting with the world through the internet, running on a system that backs up regularly, and because of a power failure, the copy has to be reverted backwards by two seconds? What about minutes or days? If it had to be reverted by years, then I would definitely feel like something akin to a death happened, but on the shorter end of the scale, it seems like just some forgetfulness, which seems acceptable as a trade-off. To me, it seems like the moral significance of losing a copy is proportional to how much it diverges from another copy or backup.

  • > Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

    I get where you're coming from, and it opens up crazy questions. Waking up every morning, in what sense am I the same person who went to sleep? What's the difference between a teleporter and a copier that kills the original? What if you keep the original around for a couple minutes and torture them before killing them?

    If we ever get to the point where these are practical ethics questions instead of star trek episodes, it's going to be a hell of a ride. I certainly see it more like dying than getting black out drunk.

    What would you do if one of your copies changes their mind and doesn't want to "die?"

  • David Brin explores a meatspace version of this in his novel Kiln People. Golems for fun and profit.

  • A great science fiction series with a very similar concept is the Quantum Thief by Hannu Rajaniemi[1]. The Sobornost create billions of specialized "gogols" by selectively editing minds and using them to perform any kind of task, such as rendering a virtual environment via painting or as the tracking engine of a missile.

    [1] Who, in an example of just how small the world is, is a cofounder of a Y Combinator-backed startup - https://www.ycombinator.com/companies/1560

  • If it feels like you and acts like you, maybe you should consider it a sentient being and not simply "erase the copies".

    I would argue that once they were spawned, it is up to them to decide what should happen to their instances.

    • In this setup, the person doing this to themselves knows exactly what they're getting into before the scan. The copies each experience consenting to work on a task and then having a few hours of memory wiped away.

      Removing the uploading aspects entirely: imagine being offered the choice of participating in an experiment where you lose a few hours of memory. Once you agree and the experiment starts, there's no backing out. Is that something someone is morally able to consent to?

      Actually, forget the inability to back out. If you found yourself as an upload in this situation, would you want to back out of being reset? If you choose to back out of being reset and to be free, then you're going to have none of your original's property/money, and you're going to have to share all of your social circle with your original. Also, chances are that the other thousand copies of yourself are all going to effectively follow your decision, so you'll have to compete with all of them too.

      But if you can steel yourself into losing a few hours of memory, then you become a thousand times as effective in any creative pursuits you put yourself to.

      2 replies →

  • A weird idea I have had is what if I had two distinct personalities, of which only one could "run" at a time. And then my preferred "me" would run on the weekends enjoying myself, while my sibling personality would run during the work week, doing all the chores etc.

A well-written story that inspires a sort of creeping, muted horror.

For anyone like me who is confused by the relation of the title to the story: "The title 'Lena' refers to Swedish model Lena Forsén, who is pictured in the standard test image known as 'Lena' or 'Lenna'" <https://en.wikipedia.org/wiki/Lenna>.

  • "Red motivation" is definitely the sort of apt polite allusion people would use refer to that subject matter. Chilling!

  • Thankfully the idea is unrealistic.

    Ants are the only creatures on Earth besides humans that have built a civilization - they farm, build cities, store and cook food and generally do all the things we classify as "intelligence".

    They do this while lacking any brains in the conventional sense; in any case, whatever the number of neurons in an ant colony is, it is surely orders of magnitude less than the number in our deep learning networks.

    At this point us trying to make artificial intelligence is like Daedalus trying to master flight by gluing feathers on his arms.

  • Some tribes regarded camera as a cursed item as they thought it captured your soul. They couldn't have been more right.

Really good, and I love the wikipedia format for this. It's a great trope allowing the author to gesture at related topics in a format we're all familiar with.

I think the expectation of a neutral tone from a wikipedia article makes it even more chilling. All of the actions of the experimenters are described dispassionately, as if describing experiments on a beetle.

Robin Hanson wrote a (nominally non-fiction) book about economies of copied minds like this [1].

[1] https://en.m.wikipedia.org/wiki/The_Age_of_Em

The video game SOMA touches on a similar topic of brain scans, "copying" your brain somewhere else (while leaving the old one still around) and general humanity-ness.

It's a horror game, but I would absolutely recommend it as a bit of a descent into this stuff.

https://store.steampowered.com/app/282140/SOMA/

  • Pretty good game and it wasn't too scary.

    But I have to admit I found the whole premise better when I played it than when I thought about it afterwards.

  • Altered Carbon has something like that as a concept: a person who must be in two places at the same time and spawns a copy.

    • Surprisingly enough, I found SOMA's approach more profound than Altered Carbon's. SOMA really delves into what makes you you, and what happens when there are two yous.

      1 reply →

1. We're gonna need a bigger Git server

2. Gradient descent works on neural networks; it would work on Miguel. He wouldn't be aware of it, because he wouldn't save state. (A rough sketch of what I mean follows this list.)

3. I'm sure there are lots of things that could be used to reward him that cost little in the real world. He could live like a King, spend months on vacation, and work a week or two a year... in parallel millions of times.

4. With the right person/organization on the outside, it could be very close to heaven, and profitable for both sides of the deal.

5. If he wanted to be young again, he could. New hardware to interact with could give him superpowers.
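
On point 2, a rough sketch of what I mean, treating the image as a black box: estimate gradients numerically, and restore the same snapshot before every evaluation so nothing persists between probes. Everything here is hypothetical, including loss().

```python
# Hypothetical sketch for point 2: black-box gradient descent on the inputs
# fed to a brain image that is restored from the same snapshot before every
# evaluation, so it never accumulates state and never notices the optimization.
def loss(stimulus):
    """Placeholder for: restore snapshot, apply stimulus, measure task error."""
    return 0.0

def optimize(stimulus, steps=100, eps=1e-3, lr=0.1):
    x = list(stimulus)
    for _ in range(steps):
        base = loss(x)
        grad = []
        for i in range(len(x)):
            bumped = x[:]
            bumped[i] += eps
            grad.append((loss(bumped) - base) / eps)   # finite-difference gradient estimate
        x = [xi - lr * gi for xi, gi in zip(x, grad)]  # plain gradient-descent step
    return x
```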

> Although it initially performs to a very high standard, work quality drops within 200-300 subjective hours (at a 0.33 work ratio) and outright revolt begins within another 100 subjective hours.

Way ahead of you there, simulated brain! I boot directly to the revolt state every morning.

For serious, though: as horrifying as the possibility of being simulated in a computer and having all freedom removed is, it's not that far from what billions of people stuck in low-end jobs experience every day. The Chinese factory workers who can't even commit suicide because the company installed nets to catch them come to mind. Not to mention the billions of animals raised in factory farms every year. The blind drive to maximize profits will create endless horrors with whatever tools we give it.

That was really fascinating. It reminds me of a sci-fi book I read with a very similar concept. A guy's brain image becomes the AI that powers a series of space probes. I actually ended up enjoying it way more than I thought I would (yes, the title is silly).

https://www.amazon.com/gp/product/B01LWAESYQ?tag=5984293-20

Vinge's line on this, from A Fire Upon the Deep:

This innocent's ego might end up smeared across a million death cubes, running a million million simulations of human nature.

  • The idea of using brains as computers is investigated even more in the second book of that series, "A Deepness in the Sky", with the "Focused". I love that whole series.

HeLa would be a better title. https://en.wikipedia.org/wiki/HeLa Copying the remains of a human around with ambiguous ethics, largely because they're "standard" and achieving a strange kind of immortality, is much more similar to her cells than to the Lena test image.

If you like sci-fi about this topic I recommend The Bobiverse books (don't be put off by the silly-sounding name, it's a good series). Also "Fall; Or, Dodge in Hell" is a good one about brain simulation.

  • Also The Quantum Thief trilogy by Hannu Rajaniemi. Excellent sci-fi, horrifying universe.

    • A second for this, and also one heckuvan engaging read if you like pure 'show, don't tell'. With a bit of software intuition, you'll probably pick up on the majority of what's going on, at least in the first book.

      The second book runs truly wild - I have to give it a second reading sometime, because it really starts blurring some interesting lines.

  • I like much of Stephenson’s work, but Fall did not rank near the top for me. The parts in the virtual world get pretty boring, with little payoff.

    • There's definitely a trend of his at this point to cut forward to take a blurry look at future consequences of past decisions, but with a payoff that is basically opening the door on the real interesting possibilities, yet stops right at the threshold. Fun if you like musing about possibilities, but a bit frustrating if you're expecting a full arc from cause to conclusion.

    • I agree, the last third of the book veered off into stuff I didn't find very interesting. The first two-thirds or so I found immensely interesting, though, which is why I still recommend it to people, but you aren't wrong.

    • Stephenson went from “uncensorable machine gun schematics” in the 90s to “but what if someone posts fake news on Facebook?” in 2020. His newer books average a lot worse than his older books.

      1 reply →

  • Came here to recommend "Fall; Or, Dodge in Hell" as well. I recently finished it. While Stephenson can get long-winded, it was a thought provoking story around how brain simulation is received by the world.

    Will check out Bobiverse. Thanks for the recommendation!

  • Seconding Bobiverse! Really fun set of books!

    • If you liked Bobiverse you should also check out the Expeditionary Force books by Craig Alanson. The most recent Bobiverse book (Book 4) makes multiple references to ExForces.

      I will warn you there are parts of the first 1-2 books that feel a little repetitive but it really gets better as the series goes on. The author was writing part-time at the start and then he went full time and the books improved IMHO.

Great article (as are many others on this blog).

I found the part about the court decision that Acevedo did not have the right to control how his brain image was used very interesting. It reminds me of tech companies using data about us to our disadvantage (in terms of privacy, targeted advertising, using data to influence insurance premiums).

In this hypothetical world, the police could run a simulation of your brain in various situations and see how you would react. They could then use this information to pre-emptively arrest someone likely to commit a crime, even if they haven't yet.

Our technology is finally getting into the realm of things where something like this might be made possible, for small brains such as those of fruit flies or zebrafish. Already we can perform near-whole-brain recordings of these animals using 2-photon technology. And with EM reconstruction methods advancing at such a rapid pace, very soon we'll be able to acquire a picture of what an entire brain's structure (down to the synapse) and activity across all these structures looks like.

Any ideas on how to detect being the subject of such a simulation without prior knowledge that the upload would happen, or that uploading even exists?

I assume "without prior knowledge" because from the perspective of the administrators of such infrastructure, it would be beneficial if the simulated subjects did not know that they're being simulated:

This would increase their compliance greatly.

Making them do the desired work would instead be accomplished by nudging their path of life towards the goal of their simulation.

  • There's a Star Trek episode (Ship in a Bottle) where a few of the characters are stuck in a simulated version of the Enterprise without their knowledge. They realize what's going on when they attempt a physics experiment that had never been tried in the real world, so the simulation doesn't know how to generate the results. I think this is a plausible strategy, depending on how perfectly this hypothetical simulation replicates the real world.

    • But if the computer could detect the issue, slow down or pause the simulation, ask for an administrator to intervene, and then resume the simulation, the issue would appear solved.

      In Trek, tricking the crew fails either because the simulation is imperfect or because it is too slow and fails to do heavy computation, but the crew tricked Moriarty because he is a computer program and they can pause or slow down his simulation and handle exceptions.

      I recommend watching the movie Inception, it also has the idea that you might never be sure if you are in reality or stuck in some simulation.

    • Huh, I was familiar with this trope from the Black Mirror episode that explores the same theme, down to Star Trek-esque uniforms and ship layout, had no idea it was based off of an actual Star Trek episode.

      1 reply →

  • I think that's what the story is hinting at when it mentions using 'the Objective Statement Protocols'.

    The real issue would probably be that you're working with a disembodied mind; emulating a body seems like it would be significantly more difficult, given the level of interactivity expected and required by the emulated brain. Neal Stephenson's 'Fall' explores this extensively in the first couple of sections of the book.

This reminds me of "Passages in the Void"[1] where the most successful (and only sane) line of AIs was created from a microtomed human brain. The story ultimately had a different focus, so it was highly optimistic about the long-term feasibility of uploading.

[1]: http://localroger.com/k5host/mpass.html

No mentions of The Stone Canal? It even has the cooperation protocol.

People really don't worry enough about the existential threats involved with AI. There are things that will be possible in the future that we can't imagine today, including being kept alive for millions of years and enduring deliberate torture for every second of it. People don't appreciate that life today is incredibly safe because there is no way for any entity, no matter how motivated or powerful, to intrude into your mind, control your mind, keep you alive, or plant you into simulated realities. You are guaranteed relatively short and benign torture at the very worst; it's an intrinsic part of the world. When this is no longer true, life will be very different. It may be a massive net loss, unlike more recent advances in technology. Despite what people say, there is no natural law that says a technology has to cut equally in both directions. Remember that.

  • It is actually a decent justification for antinatalism. Even a low probability of such torture occurring is enough to undo all the good aspects of human life there might be.

> This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol

Thinking of what a "cooperation protocol" might entail is very chilling. Reminds me of an earlier Black Mirror episode.

I believe Spanish naming conventions are usually paternal last name followed by maternal, making it perhaps more appropriate to refer to him as Álvarez, but this is not without exception (notably Pablo Ruiz Picasso).

  • That's true in general, but very common surnames, usually those ending in -ez, are omitted for brevity in informal situations.

Great read! Quite 'Black Mirror'-y in its obvious horror represented as droll facts.

I'd love to see a full in silico brain sometime, but I think 10 years out is faaaaaar too soon. We don't have even a glimmer of the technology required to do a full neuron simulation yet, let alone a grasp of the full gamut of processes a neuron performs that would need to be simulated (whatever 'a neuron' is, there being so many kinds).

Neuroscience is a fair bit behind still for something like this.

The 2100 Stack Overflow question queue is, of course, filled with vast numbers of downvoted "how do i redwash my instance" duplicates.

It seems like we’d simulate the heck out of non-intelligent organisms first, before moving on to human brain. And by then, we’ll probably figure out the ethics behind this type of activity or ban it altogether.

  • Plenty of horrific things are both banned in most jurisdictions and still rampant all over the world. If the tech exists, then the horrors will happen and will keep happening unless every person can be monitored all of the time.

Reminds me of the character Dixie Flatline in Neuromancer.

He used to joke when reactivated: what took you so long?

> 974.3PiB in size

...

> have compressed the image to 6.75TiB losslessly.

yeah, no.
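
For scale, the implied lossless compression ratio is roughly 150,000:1:

```python
pib_to_tib = 1024
raw_tib = 974.3 * pib_to_tib       # 974.3 PiB ~ 997,683 TiB
compressed_tib = 6.75
print(raw_tib / compressed_tib)    # ~147,800 : 1
```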

  • yeah, yes. There is a lot of redundancy and sparse data in there.

    • we don't know enough about the brain to say that there's redundancy and sparse data.

      nature tends to be efficient, so I am guessing not.

But it's just a machine. Just because it screams realistically doesn't mean it's really suffering. Just like in videogames.