Comment by TeMPOraL

6 days ago

The same is true of people: repeated attempts at social engineering will eventually succeed. We deal with that through a combination of training, segregating responsibilities, involving multiple people in critical decisions, and ultimately by treating malicious attempts to fool people as felonies. The same is needed with LLMs.
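
As a minimal sketch of the "multiple people on critical decisions" idea applied to an LLM agent, high-risk tool calls can be gated behind an independent approver (a human, or a separate model that never sees the untrusted input that produced the request). All names here are hypothetical, chosen just to illustrate the pattern:

```python
# Hypothetical sketch: require independent sign-off before an LLM agent
# may perform a high-risk action, mirroring how organizations involve
# multiple people in critical decisions. Names are illustrative only.

HIGH_RISK_ACTIONS = {"transfer_funds", "delete_records", "send_external_email"}

def approve(action: str, args: dict) -> bool:
    """Independent check: here a human at the console; it could equally
    be a second reviewer or a separately-prompted model."""
    answer = input(f"Approve {action}({args})? [y/N] ")
    return answer.strip().lower() == "y"

def execute_llm_action(action: str, args: dict) -> str:
    """Run an action the LLM requested, but only after an independent
    approval if the action is on the high-risk list."""
    if action in HIGH_RISK_ACTIONS and not approve(action, args):
        return f"denied: {action} requires independent approval"
    return f"executed {action}"

if __name__ == "__main__":
    # A prompt-injected request to exfiltrate data gets stopped at the gate.
    print(execute_llm_action("send_external_email",
                             {"to": "attacker@example.com"}))
```

The point of the design is that no single fooled party (human or model) can complete a critical action alone; the approver sees only the proposed action, not the adversarial input that produced it.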

In the context of security, it's actually helpful to anthropomorphize LLMs! They are nowhere near human, but they are fundamentally similar enough to exhibit the same risks and failure modes.