Comment by mjr00
2 years ago
Mind defining "likely" and "soon" here? Like 10% chance in 100 years, or 90% chance in 1 year?
Not sure how a Go engine really applies. Do you consider cars superintelligent because they can move faster than any human?
I'm with you here, but it's worth noting that while the combustion engine has improved our day-to-day lives and our society overall, it's also a great example of a technology used to enable the killing of hundreds of millions of people by exactly the kinds of shady institutions and individuals the commenter referenced. You don't need something "super intelligent" to cause a ton of harm.
Yes, just like the car and the electric grid.
> Mind defining "likely" and "soon" here? Like 10% chance in 100 years, or 90% chance in 1 year?
We're just past the Chicago Pile days of LLMs [1]. Sutskever believes Altman is running a private Manhattan Project inside OpenAI. I'd say the evidence for LLMs having superintelligence capability rests on shakier theoretical ground today than nuclear weapons did in 1942, but I'm no expert.
Sutskever is an expert. He's also conflicted, both in his opposition to OpenAI (reputationally) and his pitching of SSI (financially).
So I'd say there appears to be a disputed but material possibility of LLMs achieving something that, if it doesn't pose a threat to our civilisation per se, does pose one as a novel military element. Given that risk, it makes sense to be cautious. Paradoxically, however, that risk profile calls for strict regulation approaching nationalisation. (Microsoft's not-a-takeover takeover of OpenAI perhaps gives an enterprising lawmaker the path through which to do this.)
[1] https://en.wikipedia.org/wiki/Chicago_Pile-1