
Comment by godelski

5 days ago

Sure, I wrote a lot and it's a bit scattered. You're welcome to point to something specific but so far you haven't. Ironically, you're committing the error you're accusing me of.

I'm also not exactly sure what you mean, because the only claim I've made is that they've made assumptions where there are other possible, and likely, alternatives. It's much easier to prove something wrong than to prove it right (or, in our case, to provide evidence against it rather than for it, since no one is proving anything).

So in the first part I'm saying we have to consider two scenarios: either intelligence is bounded or it's unbounded. I think this is a fair assumption; do you disagree?

In the unbounded case, their scenario can happen. So I don't address that. But if you want me to, sure: it's because I have no reason to believe information is unbounded when everything around me suggests that it is bounded. Maybe start with the Bekenstein bound (stated below). Sure, it doesn't prove information is bounded, but you'd then need to convince me that an entity not subject to our universe and our laws of physics is going to care about us and be malicious. Hell, that entity wouldn't even be subject to time, and we're still living.
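For reference, a standard statement of that bound (quoted from memory, not from the thread): the entropy S, and hence the information, that can fit in a sphere of radius R containing total energy E is finite,

    S \le \frac{2 \pi k R E}{\hbar c}

which doesn't settle the question, but is at least suggestive that any physically embedded intelligence has a finite information budget.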

In a bounded case it can happen, but we need to understand what conditions that requires. There are a lot of candidate functions, but I went with an S-curve for simplicity and familiarity. It'll serve fine (we're on HN, man...) for any monotonically increasing case (or even a non-monotonic one, it just needs to tend that way).

So think about it. Change the function if you want, I don't care. But if intelligence is bounded, and we're x more intelligent than ants, where on the graph do we need to be for another thing to be x more intelligent than us? There aren't a lot of opportunities for that to even happen. It requires our intelligence (on that hypothetical scale) to be pretty similar to an ant's. What cannot happen is for the ant to be in the lower tail of that function while we're past the inflection point (halfway). There just isn't enough space on that y-axis for anything to be x more intelligent. This doesn't completely reject that crazy superintelligence, but it does place some additional constraints that we can use to reason about things. For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
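To make that headroom point concrete, here's a toy numerical sketch (the logistic curve and the ant/human placements on it are made-up assumptions, purely for illustration):

    import math

    def bounded_scale(z):
        # a bounded, monotonically increasing "intelligence scale", capped at 1.0
        return 1.0 / (1.0 + math.exp(-z))

    ant = bounded_scale(-4.0)   # hypothetical ant: deep in the lower tail (~0.018)
    human = bounded_scale(1.0)  # hypothetical human: past the inflection point (~0.731)

    x = human / ant             # how many times "smarter" we are than the ant (~40x)
    needed = human * x          # level required to be x times smarter than us (~29.7)

    print(f"x = {x:.1f}, required level = {needed:.1f}, but the scale caps at 1.0")
    # Only if the human point also sits near the lower tail is there room above
    # us for the same multiplicative jump.

Swap in any other bounded, increasing curve and the same constraint shows up: the ratio argument only leaves room for an "ant-to-human" jump above us if we ourselves are still near the bottom of the scale.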

Yeah, I'll admit that this is a very naïve model, but again, we're not trying to say what's right, just that there's good reason to believe their assumption is false. Adding more complexity to this model doesn't make their case stronger, it makes it weaker.

The second part I can make much easier to understand.

Yes, there are bad smart people, but look at the smartest people in history. Did they seek power or wish to do harm? Most of the great scientists did not. A lot of them were actually quite poor, and many even died fighting persecution.

So we can't conclude that greater intelligence results in greater malice. This isn't hearsay; I'm just saying Newton wasn't a homicidal maniac. I know, bold claim...

  > starting from hearsay

I don't think this word means what you think it means. Just because I didn't link sources doesn't make it a rumor. You can validate them, and I gave you enough information to do so. You now have more. Ask GPT for links, I don't care, but people should stop worshiping Yud.

And about this second comment: I agree that intelligence is bounded. We can discuss how much more intelligence is theoretically possible, but even if we limit ourselves to extrapolation from human variance (the agency of Musk, the math smarts of von Neumann, the manipulativeness of Trump, etc.), and add a little more speed and parallelism (100 times faster, 100 copies cooperating), then we can get pretty far.

Also, I agree we are all pretty fucking dumb and cannot make these kinds of predictions, which is actually one very important point in rationalist circles: doom is not certain, but P(doom) looks uncomfortably high. How lucky do you feel?

  •   > How lucky do you feel?
    

    I don't gamble. But I am confident P(doom) is quite low.

    Despite that, I do take AI safety quite seriously and literally work on the fundamental architectures of these things. You don't need P(doom) to be high for you to take doom seriously. The probability isn't that consequential when we consider such great costs. All that matters is the probability is not approximately zero.

    But all you P(doom)-ers just make this work harder to do, and make it harder to improve these systems and make them safer. It just furthers people like Altman, who are pushing a complementary agenda and who recognize that you cannot stop the development of AI. In fact, the more you push this doom story, the more impossible you make it to stop. What the story of doom (as well as the story of immense wealth) pushes is a need to rush.

    If you want to really understand this, go read about nuclear deterrence. I don't mean watch some YouTube video or read a Less Wrong article; I mean go grab a few books. Read both sides of the argument. But as it stands, this is how the military ultimately thinks, and that effectively makes it true. You don't launch nukes because your enemy will too. You also don't say what that red line is, because then you can still use it as a bargaining chip. If you state that line, your enemy will just walk up to it and do everything before it.

    So what about AI? The story being sold is that this enables a weapon of mass destruction. Take the US and China. China has to make AI because the US makes AI, and if the US makes AI first, China can't risk that the US won't use it to take out all their nukes or ruin their economy. They can't take that risk even if the probability is low. But the same is true in reverse. So the US can't stop because China won't, and if China gets there first they could destroy the US. You see the trap?[0] Now here's the fucking kicker. Suppose you believe your enemy is close to building that AI weapon. Does that cross the red line at which you will use nukes?

    So you doomers are creating a self-fulfilling prophecy, in a way. Ironically, this is highly relevant to the real dangers of AI systems. The current (and still future) danger comes from outsourcing intelligence and decision making to these machines. That becomes less problematic once we actually create machines with intelligence (intelligence like that of humans or animals, not like automated reasoning, a technology we've had since the 60s).

    You want to reduce the risk of doom? Here's what you do. You convince both sides that instead of competing, they pursue development together. Hand in hand. Openly. No one gets AI first. Secret AI programs? Considered an act of aggression. Yes, this still builds AI, but it dramatically reduces the risk of danger. You don't need to rush or cut corners because you're worried about your enemy getting a weapon first and destroying you. You get the "weapon" simultaneously, along with everyone else on the planet. It's not a great solution, because you still end up with "nuclear weapons" (analogously), but if everyone gets them at the same time then you end up in a situation like the one we've been in for the last few decades (regardless of the cause, it is an abnormally peaceful time in human history), where MAD policies are in effect[1].

    I don't think it'll happen; everyone will say "I would, but they won't" and end up failing without trying. But ultimately this is a better strategy than getting people to stop. You're not going to be successful in stopping it. It just won't happen. P(doom) exists in this scenario even without the development of AGI. As long as that notion of doom exists, there are incentives to rush and cut corners. People like Altman will continue to push that message and say that they are the only ones who can do it safely and do it fast (which is why they love the "Scale Is All You Need" story). So if you are afraid, I don't think you're afraid enough. There's a lot of doom that exists before AGI. You don't need AGI or ASI for the paperclip scenario. Such an AI doesn't even require real thinking[2].

    The reason doomers make work like mine harder is that researchers like me care about the nuances and subtleties. We care about understanding how these systems work. But as long as a looming threat is on the line, people will argue that we have no time to study the details or find out how these things work. You cannot make these things safe without understanding how they work (to a sufficient degree, at least). And frankly, it isn't just doomers; it's also people rushing to make the next AI product. It doesn't matter that ignoring those details and nuances is self-sabotaging. The main assumption underlying my suggestion is that when people rush they tend to make more mistakes. It's not guaranteed that people make mistakes, but there sure is a tendency for that to happen. After all, we're only human.

    You ask how lucky I feel? I'll ask you how confident you are that a bunch of people racing to create something won't make mistakes. Won't make disastrous mistakes. This isn't just a game between the US and China; there are a lot more countries involved. You think all of them can race like this and not make a major mistake? A mistake that delivers the doom in P(doom)? Me? I sure don't feel lucky about that one.

    [0] It sounds silly, but this is how Project Stargate happened. No, not the current project that ironically shares the same name; I mean the one in the '70s where they studied psychic powers. It started because a tabloid published that the Russians were doing it, so the US started research in response, which in turn caused the Russians to actually research psychic phenomena.

    [1] Not to mention that if this happened it would be a unique act of unity that we've never seen in human history. And hey, if you really want to convince AI, Aliens, or whatever that we can be peaceful, here's the chance.

    [2] As Melanie Mitchell likes to point out, an AGI wouldn't have this problem, because if you have general intelligence you understand that humans won't sacrifice their own lives to make more paperclips (who would even use them?). So the paperclip scenario is a danger of sophisticated automata rather than of intelligence.

    • Thank you for the thoughtful response. On a first read I thought everything looked reasonably correct. However, you present the doom argument as divisive and as causing the race, when in fact it is probably the only argument for cooperation and for slowing the race.

>For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).

...which is why we should be careful not to rush full speed ahead and develop AI before we can predict how it will behave after some iterations of self-improvement, as the rationalist argument goes.

BTW, you are assuming that intelligence will necessarily and inherently lead to (good) morality, and I think that's a much weirder assumption than some of the ones you're accusing rationalists of holding.

  •   > you are assuming that intelligence will necessarily and inherently lead to (good) morality
    

    Please read before responding. I said no such thing. I even said there are bad smart people. I only argued that a person's goodness is orthogonal to their intelligence. But I absolutely did not make an assumption that intelligence equates to good. I said it was irrelevant...

    • Idk, you certainly seemed to be implying that, especially in your earlier comment. I would agree that it is orthogonal; I would think most rationalists would, too.


I apologize for the tone of my comment, but this is how I read your arguments (I was a little drunk at the time):

1. future AI cannot be infinitely intelligent, therefore AI is safe

But even with our level of intelligence, if we get serious we can eliminate all humans.

2. some smart ppl I know are peaceful

Do you think Putin is dumb?

3. smart ppl have different preferences than other ppl therefore AI is safe

Ironically this is the main doom argument from EY: it is difficult to make an AI that has the same values as us.

4. AI is competent enough to destroy everyone but is not able to tell fact from fiction

So are you willing to bet your life and the life of your loved ones on the certainty of these arguments?

  •   > I was a little drunk at the time
    

    Honestly, it still sounds like you are. You've still misread my comment and think I said there can't be bad smart people. I made no such argument; I argued that intelligence isn't related to goodness.

    • If that was what you meant to say, though, you've gotta admit that opening a paragraph with "The other weird assumption I hear is about how it'll just kill us all", and then spending the rest of the paragraph giving examples of the peacefulness of smart people, is not the most effective strategy for communicating that.
