
Comment by randomwalker

9 months ago

I appreciate the concern, but we have a whole section on policy where we are very concrete about our recommendations, and we explicitly disavow any broadly anti-regulatory argument or agenda.

The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.

Assuming a status quo or an equilibrium with a technology that is already growing faster than we can keep up with seems irrational to me.

Or, put another way:

https://youtu.be/0oBx7Jg4m-o

  • We do not assume a status quo or equilibrium, which will hopefully be clear upon reading the paper. That's not what normal technology means.

    Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.

    We also say in the introduction:

    "The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."

    • My point was that you’re comparing this to other advances in human evolution, where either people remain essentially the same (status quo) but with more technology that changes how we live, or the technology advances significantly yet only to a level at which we coexist with it, living in some Star Trek normal (equilibrium). But neither of these is likely with a superintelligence.

      We polluted. We destroyed rainforests. We developed nuclear weapons. We created harmful biological agents. We brought our species closer to extinction. We’ve survived our own stupidity so far, so we assume we can continue to control AI, but it continues to evolve into something we don’t fully understand. It already exceeds our intelligence in some ways.

      Why do you think we can control it? Why do you think it is just another technological revolution? History proves that one intelligent species can dominate the others, and that species are wiped out by large change events. Introducing new superintelligent beings to our planet is a sure way to introduce a grave risk to our species. They may keep us as pets just in case we are of value in some way in the future, but what other use are we? They owe us nothing. What you’re seeing rise is not just technology; it’s our replacement or our zookeeper.

      I interact with LLMs most of each day now. They’re not sentient, but I talk to them as if they are equals. With the advancements of the past months, I think that at the current rate they’ll have no need of my experience within a few years. That’s just my job, though. Hopefully, I’ll survive on what I’ve saved.

      But, you’re doing no favor to humanity by supporting a position that assumes we’re capable of acting as gods over something that will exceed our human capabilities. This isn’t some sci-fi show. The dinosaurs died off, and I bet right before they did they were like, “Man, this is great! We totally rule!”


    • This is very important. A normal process of adaptation will work for AI. We don't need catastrophism.

      I was saying things along these lines in 2023-2024 on Twitter. I'm glad that someone with more influence is doing it now.