Comment by oconnor663

5 years ago

I wonder how much the "experience of having done the first few hours' work" is necessary to continue working on a task, vs how quickly a "fresh copy" of myself could ramp up on work that other copies had already done. Of course that'll vary depending on the task. But I'm often reminded of this amazing post by (world-famous mathematician) Terence Tao, about what a "solution to a major problem" tends to look like:

https://terrytao.wordpress.com/career-advice/be-sceptical-of...

> 14. Eventually, one possesses an array of methods that can give partial results on X, each having their strengths and weaknesses. Considerable intuition is gained as to the circumstances in which a given method is likely to yield something non-trivial or not.

> 22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.

The emphasis on "intuition gained" seems to describe a lot of learning, both in school and in new research.

Also a very relevant SSC short story: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/

The thought experiment definitely makes me think of the parallelizability of tasks. There are kinds of tasks that this setup, as described, wouldn't be very good at accomplishing. It would be better for tasks where you already know how to do each individual part without much coordination, and the limiting factor is just time. (Say you wanted to do detail work on every part of a large 3D world: each of yourselves could take on a specific region of a few square meters and only worry about collaborating with their immediate neighbors.)

Though I think of this setup only as the first phase. Eventually, you could experiment with modifying your copies to be more focused on problems and to care less about the outside world, so that they don't need to be reset regularly and can instead be persistent. I think ethical concerns start to arise once you're talking about copies that have meaningfully diverged from the operator, but I think there are appropriate ways to accomplish it. (If regular humans have logical, if not physical, parts of their brains that are dedicated to specific tasks separate from the rest of their cares, then I think it's possible in principle to mold a software agent that acts the same as just that part of your brain, without it having the same moral weight as a full person. Nobody considers it a moral issue that your cerebellum is enslaved by the rest of your brain; I think you can create molded copies that have more in common with that scenario.)

  • I wonder if these sorts of ethical concerns would/will follow an "uncanny peak", where we get more and more concerned as these brains get modified in more ways, but then eventually they become so unrecognizable that we get less concerned again. If we could distill our ethical concerns down to some simple principles (a big if), maybe the peak would disappear, and we'd see that it was just an artifact of how we "experience our ethics"? But then again, maybe not?