Comment by bondarchuk
4 days ago
>For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
...which is why we should be careful not to rush full-speed ahead and develop AI before we can predict how it will behave after some iterations of self-improvement. As the rationalist argument goes.
BTW you are assuming that intelligence will necessarily and inherently lead to (good) morality, and I think that's a much weirder assumption than some of the ones you're accusing rationalists of holding.
Please read my comment before responding. I said no such thing. I even said there are bad smart people. I only argued that a person's goodness is orthogonal to their intelligence. But I absolutely did not assume that intelligence equates to goodness. I said it was irrelevant...
Idk, you certainly seemed to be implying that, especially in your earlier comment. I would agree that it is orthogonal; I would think most rationalists would, too.
I promise you, you misread. I think this is probably the problem sentence:
>I'll also add that the vast majority of people I know are very peaceful. But neither of these means I don't know malicious people.

You'd need to change "The vast majority" to "Every" for this to be the conclusion. I'm not discounting malicious smart people; I'm pointing out that it's a weird assumption to make when most people we know are kind and peaceful.
The second comment is explicit, though.
This is not equivalent to "We can conclude that greater intelligence results in less malice." Those are completely different claims.