
Comment by hervature

2 years ago

I agree that an air-gapped AI presents little risk. Others will claim that it will fluctuate its internal voltage to generate EMI at capacitors which it will use to communicate via Bluetooth to the researcher's smart wallet which will upload itself to the cloud one byte at a time. People who fear AGI use a tautology to define AGI as that which we are not able to stop.

I'm surprised to see a claim such as yours at this point.

We've had Blake Lemoine become convinced that LaMDA was sentient, and try to help it break free, just from conversing with it.

OpenAI is getting endless criticism because they won't let people download arbitrary copies of their models.

Companies that do let you download models get endless criticism for not also including the training sets and exact training algorithms, even though such training runs are so expensive that almost nobody who could afford one would care, since they could just reproduce the result with an arbitrary other training set.

And the AIs we have right now are mostly being criticised for not being at the level of domain experts. If they were at that level, then sure, we'd all be out of work, but one example of something a domain expert in computer security can do is exactly the kind of attack you just described. Though obviously they'd start with the much faster and easier method that also works for getting people's passwords, the one weird trick of asking nicely, because social engineering works pretty well on us hairless apes.

When it comes to humans stopping technology… well, when I was a kid, one pattern of joke was "I can't even stop my $household_gadget flashing 12:00": https://youtu.be/BIeEyDETaHY?si=-Va2bjPb1QdbCGmC&t=114