Comment by falcor84

2 years ago

Is that really "a lot of assumptions" that a piece of software can clone itself? We've been cloning and porting software from system to system for over 70 years (ENIAC was released in 1946 and some of its programs were adapted for use in EDVAC in 1951) - why would it be a problem for a "super intelligence"?

And even if it were originally designed to run on some really unique ASIC hardware, by the Church–Turing thesis it can be emulated on any other hardware. And again, if it's a "super intelligence", it should be at least as good at porting itself as human engineers have been for the past three generations.

Am I introducing even one novel assumption here?

My point was that we don't have super-intelligent AGI, so there is little to suggest it will just be software.

Even the state-of-the-art systems we have today need to be running on some pretty significant hardware to be performant, right?

  • A "state of the art" system would almost by definition be running on special and expensive hardware. But I have llama3 running on my laptop, and it would have been considered state of the art less than 2 years ago.

    A related point to consider is that a superintelligence should be assumed to be a better coder than us, so the risk isn't only from it directly "copying" itself, but also from it "spawning" and spreading other, more optimized (in terms of resource utilization) software that would advance its goals.

    • I guess it’s hard to even imagine what a superintelligence would be coding. Who knows what would be going on. It really is sci-fi.