Currently, AI doesn't work very well on hardware separated by hundreds of milliseconds of latency and slow network links. Both training and inference are slow.
However, I think this is a solvable problem, and I started working on it a while ago with decent results:
https://github.com/Hello1024/shared-tensor
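To make the latency point concrete, here's a minimal sketch of the general idea (this is not the shared-tensor API, just the local-update / periodic-sync pattern that makes slow links tolerable: do many optimizer steps locally and only exchange parameters occasionally, so the round-trip cost is amortized over lots of compute). The sync_params helper is a hypothetical stand-in for whatever transport you actually use.

    # Local-SGD-style loop: train locally, sync averaged parameters
    # over a slow, high-latency link only once every SYNC_EVERY steps.
    import torch

    SYNC_EVERY = 500  # more local steps per sync -> better tolerance of slow links

    def sync_params(model):
        """Placeholder: exchange parameters with peers and return the
        element-wise average. In a real system this would be an async
        RPC over the slow link; here it's a no-op stand-in."""
        return [p.detach().clone() for p in model.parameters()]

    def train(model, data_loader, total_steps):
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        step = 0
        for x, y in data_loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
            step += 1
            if step % SYNC_EVERY == 0:
                # The round trip is paid once per SYNC_EVERY steps, so a
                # 200 ms link adds well under 1 ms of overhead per step.
                averaged = sync_params(model)
                with torch.no_grad():
                    for p, avg in zip(model.parameters(), averaged):
                        p.copy_(avg)
            if step >= total_steps:
                break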
When someone gets this working well, I could totally see a distributed AI being tasked with expanding its own pool of compute nodes by worming its way into machines, developing new exploits, and sucking up more training data.
Couldn't an AI write and deploy a botnet much like a human does today, with a small, centralized inference core?
It doesn't need to be fully decentralized; the control plane just needs some redundancy.
It's kind of surprising that it hasn't happened already, outside of IoT junk. It seems like computer OSes just got so secure that it's become impractical to deploy a widespread exploit, and everything moved to scamming instead.
Botnets will always go for the biggest bang for their buck, which at the moment seems to be IoT devices and residential IP proxies. They do still exist: https://blog.cloudflare.com/defending-the-internet-how-cloud...
You don’t need a full host compromise to send network traffic