Comment by layer8
19 days ago
Biological systems are vastly more complex, and less brittle, in the sense that killing a single cell doesn't cause system failure the way, for example, removing an object file from a program often would. Just look at the complexity of a single cell, and try to imagine what an analog of similar complexity would be in software.
You're kind of jumping around in scope here, and I think you got it a little wrong.
>Biological systems are vastly more complex, and less brittle, in the sense that killing a single cell doesn't cause system failure like for example removing an object file from a program often would.
Removing a single node from a massively parallel system doesn't kill the system either, it only removes one node, another node will spin up and replace it, just like a failing cell would in a biological system. One liver cell doesn't do anything for the host on its own, it's part of a massively parallel system.
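The node-replacement idea above can be sketched in a few lines. This is a toy model, not any real orchestrator: a cluster health-checks its nodes before routing a request and spawns a replacement for any dead one, so killing a node never kills the system.

```python
import random

class Node:
    """One worker in the pool -- analogous to a single cell."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.alive = True

    def handle(self, request):
        return f"node-{self.node_id} handled {request}"

class Cluster:
    """Toy massively parallel system: dead nodes get replaced, service continues."""
    def __init__(self, size):
        self.next_id = 0
        self.nodes = [self._spawn() for _ in range(size)]

    def _spawn(self):
        node = Node(self.next_id)
        self.next_id += 1
        return node

    def kill_random_node(self):
        random.choice(self.nodes).alive = False

    def handle(self, request):
        # The "another node will spin up" step: replace dead nodes before routing.
        self.nodes = [n if n.alive else self._spawn() for n in self.nodes]
        return random.choice(self.nodes).handle(request)

cluster = Cluster(size=3)
cluster.kill_random_node()
print(cluster.handle("req-1"))  # still served; the dead node was replaced
```

Real systems (Kubernetes, Erlang supervisors, etc.) do essentially this with health checks and restart policies; the losing side is always the individual node, never the system.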
> Just look at the complexity of a single cell, and try to imagine what an analog of similar complexity would be in software.
Removing some "parts" from a cell certainly could kill it, trigger apoptosis in a million other ways, or turn it cancerous. But "parts" aren't analogous to software; DNA is. The same goes for a single node in a system - remove part of the code and you're going to have big problems - for that node. But it probably won't kill the other nodes in the system (though a virus could).
There are about 3 billion base pairs in human DNA. I could imagine that there are more than 3 billion lines of code running important things throughout the world right now - maybe even in one system, or likely there soon will be. With "AI" doing the coding, that number is going to explode, with no one able to understand all of it. So I could imagine "AI" probably leading to some kind of "digital cancer", the same way there are viruses and other analogues to biological systems.
> "AI" will probably lead to some kind of "digital cancer"
Gosh, I've just imagined someone asking an AI agent to code a computer virus to infect software "X". The virus's code will be wonderfully complex, and so, therefore, will be the response of the AI responsible for keeping "X" uninfected and in good working order.
I was imagining code becoming awesomely complex even without the adversarial element in play.
Replying to myself here. Maybe coding will eventually be simply learning how to give an AI the right prompt: instead of writing the code ourselves, we would describe the desired behavior and let the AI produce it.
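A hedged sketch of that shift - the `prompt` string and the idea of handing it to an AI are hypothetical stand-ins for whatever interface actually emerges; only the hand-written version below is real, runnable code:

```python
# Today: we write the algorithm ourselves.
def dedupe(items):
    """Remove duplicates from a list while preserving order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(dedupe([3, 1, 3, 2, 1]))  # [3, 1, 2]

# Tomorrow (speculatively): we write only the intent, and an AI writes the code.
# There is no real API here -- this string is just what the "program" might become.
prompt = (
    "Write a function dedupe(items) that removes duplicates "
    "from a list while preserving the original order."
)
```

If that's the future, then the security boundary moves from the code to the prompt, which is the point about jailbreaking below.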
Then the virus writer's job changes into jailbreaking the AI. Obviously with an AI's assistance...?
Then it would be logical to have a single AI on the phone managing all the prompts in parallel: e.g. "Hey AI, be Android by doing [actions]", "Hey AI, be Firefox...", "Hey AI, be Snapchat...", "Hey AI, be [insert app name]...".