Comment by hn_acc1

25 days ago

Once we've solved social engineering scams, we can iterate 10x as hard and solve LLM prompt injection. /s

It's like having 100 "naive/gullible people" who are good at some math/english but don't understand social context, all with your data available to anyone who requests it in the right way..