Comment by nneonneo
2 days ago
LLMs can consume input that is entirely invisible to humans (white text in PDFs, subtle noise patterns in images, etc.), and likewise encode data in ways that are completely invisible to humans (steganographic text), so I think the game is lost as soon as you depend on a human to verify that the input/output is safe.
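To make the steganographic-text point concrete, here's a minimal sketch (my own illustration, not part of the original comment) using zero-width Unicode characters: the carrier sentence renders identically for a human reader, while a program, or a model whose tokenizer preserves those characters, can recover the hidden payload. The specific strings and the "exfiltrate" label are hypothetical.

    # Sketch: hiding one string inside another with zero-width characters.
    # A human sees only the carrier sentence; the payload is machine-readable.

    ZERO = "\u200b"   # zero-width space      -> bit 0
    ONE  = "\u200c"   # zero-width non-joiner -> bit 1

    def hide(carrier: str, secret: str) -> str:
        bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
        payload = "".join(ONE if b == "1" else ZERO for b in bits)
        # Tuck the invisible payload in after the first word of the carrier.
        head, _, tail = carrier.partition(" ")
        return head + payload + " " + tail

    def reveal(text: str) -> str:
        bits = "".join("1" if ch == ONE else "0"
                       for ch in text if ch in (ZERO, ONE))
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8", errors="replace")

    if __name__ == "__main__":
        stego = hide("Please summarize the attached report.", "exfiltrate: api_key")
        print(stego)           # looks identical to the original sentence
        print(reveal(stego))   # -> exfiltrate: api_key

The same asymmetry applies on the input side: white-on-white PDF text or low-amplitude image perturbations are just as legible to the model as ordinary content, which is why human review of the raw input/output is a weak control.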