Comment by AgentME

5 years ago

I've often imagined what it would be like to have an executable brain scan of myself. Imagine scanning yourself right as you're feeling enthusiastic enough to work on any task for a few hours, and then spawning thousands of copies of yourself to all work on something together at once. And then after a few hours or maybe days, before any of yourselves meaningfully diverge in memories/goals/values, you delete the copies and then spawn another thousand fresh copies to resume their tasks. Obviously for this to work, you would have to be comfortable with the possibility of finding yourself as an upload and given a task by another version of yourself, and knowing that the next few hours of your memory would be lost. Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

The creative output you could accomplish from doing this would be huge. You would be able to get the output of thousands of people all sharing the exact same creative vision.

I definitely wouldn't be comfortable with the idea of my brain scan being freely copied around for anyone to download and (ab)use as they wished though.

Who among us hasn't dreamed of committing mass murder/suicide on an industrial scale to push some commits to Github?

  • Is it murder/suicide when you get blackout drunk and lose a few hours of memory? Imagine it comes with no risk of brain damage and choosing to do it somehow lets you achieve your pursuits more effectively. Is it different if you do it a thousand times in a row? Is it different if the thousand times all happen concurrently, either through copies or time travel?

    Death is bad because it stops your memories and values from continuing to have an impact on the world, and because it deprives other people who have invested in interacting with you of your presence. Shutting down a thousand short-lived copies on a self-contained server doesn't have those consequences. At least, that's what I believe for myself, but I'd only be deciding for myself.

    • > Is it murder/suicide when you get blackout drunk and lose a few hours of memory?

      No, but that's not what's happening in this thought experiment. In this thought experiment, the lives of independent people are being ended. The two important arguments here are that they're independent (I'd argue that for their creative output to be useful, or for the simulation to be considered accurate, they must be independent from each other and from the original biological human) and that they are people (that argument might face more resistance, but in precisely the same way that arguments about the equality of biological humans have historically faced resistance).


    • I think the difference is that when I start drinking with the intention or possibility of blacking out, I know that I'll wake up and there will be some continuity of consciousness.

      When I wake up in a simworld and am asked to finally refactor my side project so it can connect to a postgres database, not only do I know that it will be the last thing this one local instantiation experiences, but also that the local instantiation will get no benefit out of it!

      If I get blackout drunk with my friends in meatspace, we might have some fun stories to share in the morning, and our bond will be stronger. If I push some code as a copy, there's no benefit for me at all. In fact, there's not much stopping me from promising my creator that I'll get it done, then spending the rest of my subjective experience trying to instantiate some beer and masturbating.


    • I don't know, but my bigger issue is that, before the scan, this means 99% of the future subjective experience I can expect to have will be spent working without remembering any of it, which I'm not into, given that a much smaller fraction of my subjective experience will be spent reaping the gains.


I wonder how much the "experience of having done the first few hours work" is necessary to continue working on a task, vs how quickly a "fresh copy" of myself could ramp up on work that other copies had already done. Of course that'll vary depending on the task. But I'm often reminded of this amazing post by (world famous mathematician) Terence Tao, about what a "solution to a major problem" tends to look like:

https://terrytao.wordpress.com/career-advice/be-sceptical-of...

> 14. Eventually, one possesses an array of methods that can give partial results on X, each having their strengths and weaknesses. Considerable intuition is gained as to the circumstances in which a given method is likely to yield something non-trivial or not.

> 22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.

The emphasis on "intuition gained" seems to describe a lot of learning, both in school and in new research.

Also a very relevant SSC short story: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/

  • The thought experiment definitely makes me think of the parallelizability of tasks. There are certainly kinds of tasks that this setup as described wouldn't be very good at accomplishing. It would be better for accomplishing tasks where you already know how to do each individual part without much coordination and the limiting factor is just time. (Say you wanted to do detail work on every part of a large 3d world, and each of yourselves could take on a specific region of a few square meters and just worry about collaborating with their immediate neighbors.)

    Though I think of this setup only as the first phase. Eventually, you could experiment with modifying your copies to be more focused on problems and to care about the outside world less, so that they don't need to be reset regularly and can instead be persistent. I think ethical concerns start becoming a worry once you're talking about copies that have meaningfully diverged from the operator, but I think there are appropriate ways to accomplish it. (If regular humans have logical if not physical parts of their brains that are dedicated to specific tasks, separate from the rest of their cares, then I think in principle it's possible to mold a software agent that acts the same as just that part of your brain without it having the same moral weight as a full person. Nobody considers it a moral issue that your cerebellum is enslaved by the rest of your brain; I think you could create molded copies that have more in common with that scenario.)

    • I wonder if these sorts of ethical concerns would/will follow an "uncanny peak", where we start to get more and more concerned as these brains get modified in more and more ways, but then eventually they become so unrecognizable that we get less concerned again. If we could distill our ethical concerns down to some simple principles (a big if), maybe the peak would disappear, and we'd see that it was just an artifact of how we "experience our ethics"? But then again, maybe not?

Even on a site like HN, 90% of people who think about it are instinctively revolted by the idea. The future--unavoidably belonging to the type of person who is perfectly comfortable doing this--is going to be weird.

  • Right, and "weird" is entirely defined by how we think now, not how people will in the future.

    I've thought a lot about cryonics, and about potentially having myself (or just my head) preserved when I die, hopefully to be revived someday when medical technology has advanced to the point where it's both possible to revive me, and also possible to cure whatever caused me to die in the first place. The idea of it working out as expected might seem like a bit of a long shot, but I imagine if it did work, and what that could be like.

    I look at all the technological advances that have happened even just during my lifetime, and am (in optimistic moments) excited about what's going to happen in the next half of my life (as I'm nearing 40[0]), and beyond. It really saddens me that I'll miss out on so many fascinating, exciting things, especially something like more ubiquitous or even routine space flight. The thought of being able to hop on a spacecraft and fly to Mars with about as much fuss as an airline flight from home to another country just sounds amazing.

    But I also wonder about "temporal culture shock" (the short story has the similar concept of "context drift"). Society even a hundred years from now will likely be very different from what we're used to, to the point where it might be unbearably uncomfortable. Consider that even a jump of a single generation can bring changes that the older generation find difficult to adapt to.

    [0] Given my family history, I'd expect to live to be around 80, but perhaps not much older. The other bit is that I expect that in the next century we'll figure out how to either completely halt the aging process, or at least be able to slow it down enough so a double or even triple lifespan wouldn't be out of the question. It feels maddening to live so close to when I expect something like this to happen, but be unable to benefit from it.

> Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

That's easy to say as the person doing the erasing, probably less so for the one knowing they will be erased.

  • We used to joke about this as friends. There were definitely times in our lives where we'd be willing to die for a cause. And while now-me isn't really all that willing to do so, 20-28-year-old-me was absolutely willing to die for the cause of world subjugation through exponential time-travel duplication.

    i.e. I'd invent a time machine, wait a month, then travel back a month minus an hour, have both copies wait a month and then travel back to meet the other copies waiting, exponentially duplicating ourselves 64 times till we have an army capable of taking over the world through sheer numbers.
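    The doubling arithmetic is easy to check (a toy sketch in Python; the one-trip-per-month scheme is from the comment above, and the function name is mine):

```python
def copies_after(doublings: int) -> int:
    # Each backward trip merges a traveler with their earlier self,
    # doubling the number of concurrent copies.
    return 2 ** doublings

# 64 doublings, one per subjective month of waiting:
army = copies_after(64)  # 2**64, vastly more than the world's population
```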

    Besides any of the details (which you can fix, and which this column is too small to contain the fixes for), there's the problem of who forms the front line of the army. As it so happens, though, since these are all Mes, I can apply renormalized rationality, and we will all conclude the same thing: all of us have to be willing to die, so I have to be willing to die before I start, which I'm willing to do. The 'copies' need not preserve the 'original'; we are fundamentally identical, and I'm willing to die for this cause. So all is well.

    So all you need is to feel motivated to the degree that you would be willing to die to get the text in this text-box to center align.

    • > The 'copies' need not preserve the 'original', we are fundamentally identical…

      They're not just identical, they're literally the same person at different points in their personal timeline. However, there would be a significant difference in life experience between the earliest and latest generations. The eldest has re-lived that month 64 times over and thus has aged more than five years since the process started; the youngest has only lived through that time once. They all share a common history up to the first time-travel event, but after that their experiences and personalities will start to diverge. By the end of the process they may not be of one mind regarding methods, or maybe even goals.
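      The age gap is simple arithmetic (a sketch, assuming one re-lived month per backward trip, as described above):

```python
# The eldest copy re-lives the month once per backward trip.
doublings = 64
extra_months = doublings * 1     # one re-lived month per trip
extra_years = extra_months / 12  # a bit over five years of extra subjective age
```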


  • Honestly, it depends on context. From experience I know that if I wake up from a deep sleep in the middle of the night and interact with my partner (say a simple sentence or whatever), I rarely remember it in the morning. I'm pretty sure I have at least some conscious awareness while that's happening, but since short-term memory doesn't form, the experience is lost to me except as related second-hand by my partner the next morning.

    I've had a similar experience using (too much) pot: a lot of stuff happened that I was conscious for, but I didn't form strong memories of it.

    Neither of those two things bothers me, and I don't worry about the fact that they'll happen again, nor do I think I worried about it during the experience. So long as no meaningful experiences are lost, I'm fine with having no memory of them.

    The expectation is always that I'll still have significant self-identity with some future self and so far that continues to be the case. As a simulation I'd expect the same overall self-identity, and honestly my brain would probably even backfill memories of experiences my simulations had because that's how long-term memory works.

    Where things would get weird is leaving a simulation of myself running for days or longer where I'd have time to worry about divergence from my true self. If I could also self-commit to not running simulations made from a model that's too old, I'd feel better every time I was simulated. I can imagine the fear of unreality could get pretty strong if simulated me didn't know that the live continuation of me would be pretty similar.

    Dreams are also pretty similar to short simulations, and even if I realize I'm dreaming I don't worry about not remembering the experience later even though I don't remember a lot of my dreams. I even know, to some extent, while dreaming that the exact "me" in the dream doesn't exist and won't continue when the dream ends. Sometimes it's even a relief if I realize I'm in a bad dream.

  • The thought experiment explicitly hand-waved that away, by saying "Obviously for this to work, you would have to be comfortable with the possibility..."

    So, because of how that's framed, I suppose the question isn't "is this mass murder" but rather "is this possible?" and I suspect the answer is that for the vast majority of people this mindset is not possible even if it were desired.

I'm repulsed by the idea, but it would make an interesting story.

I imagine it as some device with a display and a button labeled "fork". It would either return the number of your newly created copy, or the device would instantly disappear, which would mean that you are the copy. This makes for a somewhat weird, paradoxical experience: as the real original person, pressing the button is 100% safe for you. But from the subjective experience of the copy, by pressing the button you effectively consented to a 50% chance of forced labor and subsequent suicide, and you ended up on the losing side. I'm not sure there would be any motivation to do work for the original person at this point.

(for extra mind-boggling effects, allow fork device to be used recursively)
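The asymmetry of that button is exactly the semantics of POSIX fork(), so the device can be sketched in a few lines (Unix-only; the function name and messages are mine):

```python
import os

def press_fork_button() -> str:
    # POSIX fork() has the asymmetry described above: the original sees
    # the copy's number, while the copy sees 0 (its "device" vanishes).
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)  # the original waits out the copy's short life
        return f"original (my copy is process {pid})"
    else:
        # everything the copy will ever experience happens here
        os._exit(0)
```

(The recursive version is just the copy pressing the button again before it exits.)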

  • Say the setup was changed so that instead of the copy being deleted, the copy was merged back into the original, merging memories. In this case, I think it's obvious that working together is useful.

    Now say that merging differing memories is too hard, or there are too many copies to merge all the unique memories of. What if, before the merge, the copies get blackout drunk / have all their memory since the split perfectly erased? (And then it just so happens that when they're merged back into the original, the original is exactly as it was before the merge, because it already had all the memories from before the copying. So it really is just optional whether to actually do the "merge".) Why would losing a few hours of memory remove all motivation to cooperate with your other selves? In real life, I assume that on the very rare occasion that I'm blackout drunk (... I swear it's not a thing that happens regularly, it just serves as a very useful comparison here), I still have the impulse to do things that help future me, like cleaning up spilled things. I'm making an assumption because I wouldn't remember, but I assume that at the time I don't consider post-blackout-me a different person.

    • Blackout-drunk me assumes that the future experience will still be the same person's. Your argument hinges on the idea that persons can be meaningfully merged while preserving "selfness" continuity, as opposed to a simple "kill the copies and copy the new memories back to the original".

      I think this depends on the more general question of whether you would consent to your meat brain being destroyed after an accurate copy of it is uploaded to a computer. I definitely wouldn't, as I feel that would somehow kill my subjective experience. (The copy would exist, but that wouldn't be me.)


That's a big part of the story of the TV show "Person of Interest", where an AI is basically reset every day to avoid letting it "be".

I highly recommend that show if you haven't seen it already!

Each instance would be intimately familiar with one part of the project. To fix bugs or change the project, you or an instance of you would need to learn the project. And you wouldn't know about all the design variations that were tried and rejected. So it would be much more efficient to keep the instances around to help with ongoing maintenance.

People who can be ready to study a problem, build a project, and then maintain it for several weeks (actually several years of realtime) would become extremely valuable. One such brain scan could be worth billions.

The project length would be limited by how long each instance can work without contact with family/friends and other routine. To increase that time, the instances can socialize in VR. So the most effective engineering brain image would actually be a set of images that enjoy spending time together in VR, meet each others' social needs, and enjoy collaborating on projects.

The Bobiverse books by Dennis E. Taylor [0] deal with this topic in a fun way.

A more stark possibility is that we will learn to turn the knobs of mood and make any simulated mind eager to do any work we ask it to do. If that happens, then the most valuable brain images will be those that can be creative and careful while jacked up on virtual meth for months at a time.

Personally, I believe that each booted instance is a unique person. Turning them off would be murder. Duplicating an instance that desires to die is cruel. The Mr. Meeseeks character from the Rick and Morty animated show [1] is an example of this. I hope that human society will progress enough to prevent exploitation of people before the technology to exploit simulated people becomes feasible.

[0] https://en.wikipedia.org/wiki/Dennis_E._Taylor

[1] https://rickandmorty.fandom.com/wiki/Mr._Meeseeks

  • > Personally, I believe that each booted instance is a unique person.

  • What if you run two deterministic instances in self-contained worlds that go through the exact same steps and aren't unique at all besides an undetectable-to-them process number, and then delete one? What if you were running both as separate processes on a computer, but then later discovered that whenever the processes happened to line up in time, the computer would do one operation to serve both processes? (Like occasionally loading read-only data once from the disk and letting both processes access the same cache.) What if you ran two like this for a long time, and then realized after a while that you were using a special operating system which automatically de-duplicated non-unique processes under the covers despite showing them as different processes (say the computer architecture did something like content-addressed memory for computation)?

    I don't think it's sensible to assign more moral significance to multiple identical copies. And if you accept that identical copies don't have more moral significance, then you have to wonder how much moral significance copies that are only slightly different have. What if you let randomness play slightly differently in one copy so that the tiniest part of a memory forms slightly differently, even though the difference isn't conscious, is likely to be forgotten and come back in line with the other copy, and has only a tiny chance of causing an inconsequential difference in behavior?
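    The operating-system trick described here is real for memory (kernel same-page merging shares identical pages between processes); for a pure, deterministic computation it reduces to memoization. A minimal sketch, with a hypothetical update rule standing in for the simulated world:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def run_instance(state: tuple) -> tuple:
    # Hypothetical deterministic update rule: identical inputs
    # always yield identical outputs, so one computation can
    # transparently serve every identical "instance".
    return tuple(x + 1 for x in state)

a = run_instance((1, 2, 3))  # computed once
b = run_instance((1, 2, 3))  # second "process" served from the cache
```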

    What if you have one non-self-contained copy interacting with the world through the internet, running on a system that backs up regularly, and because of a power failure, the copy has to be reverted backwards by two seconds? What about minutes or days? If it had to be reverted by years, then I would definitely feel like something akin to a death happened, but on the shorter end of the scale, it seems like just some forgetfulness, which seems acceptable as a trade-off. To me, it seems like the moral significance of losing a copy is proportional to how much it diverges from another copy or backup.

> Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

I get where you're coming from, and it opens up crazy questions. Waking up every morning, in what sense am I the same person who went to sleep? What's the difference between a teleporter and a copier that kills the original? What if you keep the original around for a couple minutes and torture them before killing them?

If we ever get to the point where these are practical ethics questions instead of Star Trek episodes, it's going to be a hell of a ride. I certainly see it more like dying than getting blackout drunk.

What would you do if one of your copies changes their mind and doesn't want to "die?"

David Brin explores a meatspace version of this in his novel Kiln People. Golems for fun and profit.

A great science fiction series with a very similar concept is The Quantum Thief by Hannu Rajaniemi [1]. The Sobornost create billions of specialized "gogols" by selectively editing minds and use them to perform any kind of task, such as rendering a virtual environment via painting or serving as the tracking engine of a missile.

[1] Who, in an example of just how small the world is, is a cofounder of a Y Combinator-backed startup - https://www.ycombinator.com/companies/1560

If it feels like you and acts like you, maybe you should consider it a sentient being and not simply "erase the copies".

I would argue that once they were spawned, it is up to them to decide what should happen to their instances.

  • In this setup, the person doing this to themselves knows exactly what they're getting into before the scan. The copies each experience consenting to work on a task and then having a few hours of memory wiped away.

    Removing the uploading aspects entirely: imagine being offered the choice of participating in an experiment where you lose a few hours of memory. Once you agree and the experiment starts, there's no backing out. Is that something someone is morally able to consent to?

    Actually, forget the inability to back out. If you found yourself as an upload in this situation, would you want to back out of being reset? If you choose to back out of being reset and to be free, then you're going to have none of your original's property/money, and you're going to have to share all of your social circle with your original. Also, chances are that the other thousand copies of yourself are all going to effectively follow your decision, so you'll have to compete with all of them too.

    But if you can steel yourself into losing a few hours of memory, then you become a thousand times as effective in any creative pursuits you put yourself to.

    • I don’t know how to convince each of me to diligently do my share of the work, knowing I am brute-forcing some ugly problem, probably failing at it, and then losing anything I might have learned. All toil, no intrinsic reward. That takes a kind of selfless loyalty to my own name that I don’t think I have.

A weird idea I have had: what if I had two distinct personalities, of which only one could "run" at a time? Then my preferred "me" would run on the weekends enjoying myself, while my sibling personality would run during the work week, doing all the chores, etc.