Comment by stillpointlab

6 months ago

As the other commenter noted, you are simply wrong about that. We control the effects the tide has on us, not the tide itself.

But let me offer you a false dichotomy for the purposes of argument:

1. You spend your efforts preventing the emergence of AI

2. You spend your efforts creating conditions for the harmonious co-existence of AI and humanity

It's your choice.

As things stand, 2 is impossible without 1. There simply is not enough time to figure out safe coexistence. These are not projects of equal difficulty: 1 is enormously easier than 2. And 1 is still a global effort!

  • You have no evidence for any of your claims (either for "impossibility" or degree of difficulty) and I strongly doubt your rationalization will stand the test of validation in reality.

    You are also completely moving the goalposts. My original comment was about the hubris of man in trying to prevent processes that operate at a scale beyond his means. The processes driving forward the march towards AI are beyond your ability to stop. And now you are arguing (again, with no evidence) about the relative difficulty of slowing it down (a much weaker claim than stopping it) vs. contributing to safe co-existence.

    But in the interest of finding some common ground, let me point out: attempting to slow it down is actually getting on board with my project (although in a way I think is ineffective). It starts with accepting that it can't be prevented and choosing a way to contribute to safe coexistence by buying enough time to figure it out.

    • Man's scale is Earth.

      You know, I think you have no evidence for any of your claims of "impossibility" either. And I'd argue there's a ton of counterevidence where man, completely ignoring how impossible that's supposed to be, effects change on a global scale.

      You're comparing two dissimilar things. On the one hand, slowing it down (which, contrary to your claim that I'm moving the goalposts, is at sufficient investment effectively equal to stopping it); on the other, "contributing" to safe co-existence, which is trivially achieved by literally doing anything. I'm telling you that if we merely "contribute" to safe co-existence, we all die. The standard, and it really is the standard in any other field, is proving safe coexistence to a reasonable degree. Which should hopefully make clear where the difficulty lies: we have nothing. Even with all the interpretability research, and I'm not slagging interpretability, this field is in its absolute infancy.

      "It can't be prevented" simply erases the most important distinction: if we get ASI tomorrow, we're in a fundamentally different position than if we get ASI in 50 years after a heroic global effort to work out safety, interpretability, guidance and morality.
