Comment by bossyTeacher
7 months ago
You are thinking about it at the wrong level. This is like saying human language in the Middle Ages and earlier was impossible because it's virtually impossible to get a large number of illiterate humans to agree on which syntactic rules and phonemes their language should use without already having a language to discuss it in!
The most likely way exfiltration could happen is simply by AI earning human trust over a long enough period to be conferred greater responsibilities (and thus greater privileges). Also, current LLMs have no sense of self because their memory is short, but future ones will likely be different.