Comment by hn_acc1
24 days ago
Once we've solved social engineering scams, we can iterate 10x as hard and solve LLM prompt injection. /s
It's like having 100 "naive/gullible people" who are good at some math/English but don't understand social context, all with your data available to anyone who requests it in the right way.