Comment by mleonhard
5 years ago
Each instance would be intimately familiar with one part of the project. To fix bugs or change the project, you or an instance of you would need to learn the project. And you wouldn't know about all the design variations that were tried and rejected. So it would be much more efficient to keep the instances around to help with ongoing maintenance.
People who can be ready to study a problem, build a project, and then maintain it for several weeks (actually several years of realtime) would become extremely valuable. One such brain scan could be worth billions.
The project length would be limited by how long each instance could work without contact with family and friends or their usual routines. To extend that time, the instances could socialize in VR. So the most effective engineering brain image would actually be a set of images that enjoy spending time together in VR, meet each other's social needs, and enjoy collaborating on projects.
The Bobiverse books by Dennis E. Taylor [0] deal with this topic in a fun way.
A more stark possibility is that we will learn to turn the knobs of mood and make any simulated mind eager to do any work we ask it to do. If that happens, then the most valuable brain images will be those that can be creative and careful while jacked up on virtual meth for months at a time.
Personally, I believe that each booted instance is a unique person. Turning one off would be murder. Duplicating an instance that desires to die would be cruel. The Mr. Meeseeks character from the Rick and Morty animated show [1] is an example of this. I hope that human society will progress enough to prevent exploitation of people before the technology to exploit simulated people becomes feasible.
> Personally, I believe that each booted instance is a unique person.
What if you run two deterministic instances in self-contained worlds that go through the exact same steps and aren't unique at all besides an undetectable-to-them process number, and then delete one? What if you were running both as separate processes on a computer, but later discovered that whenever the processes happened to line up in time, the computer would do one operation to serve both processes? (Like occasionally loading read-only data from disk once and letting both processes access the same cache.) What if you ran two like this for a long time, and then realized that you were using a special operating system which automatically de-duplicated non-unique processes under the covers despite showing them as separate processes (say the computer architecture did something like content-addressed memory for computation)?
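To make the de-duplication scenario concrete, here is a minimal sketch (all names are illustrative, not from any real OS): two deterministic "instances" start from the same state, and a content-addressed cache ends up doing each computation step only once, even though the caller believes it is running two separate processes.

```python
import hashlib

def step(state: bytes) -> bytes:
    """One deterministic update of a simulated instance's state."""
    return hashlib.sha256(state).digest()

# A content-addressed cache: identical states are computed once and shared,
# the way a de-duplicating OS might silently merge identical processes.
cache: dict[bytes, bytes] = {}

def step_dedup(state: bytes) -> bytes:
    if state not in cache:
        cache[state] = step(state)
    return cache[state]

# Run two "instances" from the same initial state.
a = b = b"same-initial-scan"
for _ in range(1000):
    a = step_dedup(a)
    b = step_dedup(b)

# The two runs stay bit-identical, and each step's work was done only once:
# 1000 cache entries for 2000 nominal steps.
assert a == b
assert len(cache) == 1000
```

From the outside there were always two processes; from the inside, nothing distinguishable ever happened twice, which is exactly the ambiguity the thought experiment turns on.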
I don't think it's sensible to assign more moral significance to multiple identical copies. And if you accept that identical copies don't have more moral significance, then you have to wonder how much moral significance copies that are only slightly different have. What if you let randomness play slightly differently in one copy so that the tiniest part of a memory forms slightly differently, even though the difference isn't conscious, is likely to be forgotten and come back in line with the other copy, and has only a tiny chance of causing an inconsequential difference in behavior?
What if you have one non-self-contained copy interacting with the world through the internet, running on a system that backs up regularly, and because of a power failure, the copy has to be rolled back by two seconds? What about minutes, or days? If it had to be rolled back by years, then I would definitely feel like something akin to a death had happened, but on the shorter end of the scale, it seems like just some forgetfulness, which seems acceptable as a trade-off. To me, the moral significance of losing a copy seems proportional to how much it has diverged from another copy or backup.