Comment by GistNoesis
12 hours ago
TLDR: It's easy: LLM outputs are untrusted. Agents, by virtue of running untrusted inputs, are malware. Handle them like the malware they are.
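A minimal sketch of what that handling might look like, assuming a POSIX host and a hypothetical `untrusted_code` string produced by the agent; this is not a real sandbox (containers or VMs would be needed for actual isolation), just the malware-analyst reflexes applied:

```python
import os
import resource
import subprocess
import sys
import tempfile

# Hypothetical agent output: code we did not write and must not trust.
untrusted_code = 'print("hello from the agent")'

def limit_resources():
    # Cap CPU time and address space for the child,
    # the same way you would when detonating a malware sample.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024,) * 2)

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(untrusted_code)
    path = f.name

try:
    result = subprocess.run(
        [sys.executable, "-I", path],  # -I: isolated mode, no user site-packages
        preexec_fn=limit_resources,    # apply rlimits before exec (POSIX only)
        capture_output=True,
        timeout=5,                     # kill it if it hangs
        env={},                        # empty environment: no leaked secrets
    )
    print(result.stdout.decode())
finally:
    os.unlink(path)
```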
>>> "While this web site was obviously made by an LLM" So I am expecting to trust the LLM written security model https://news.ycombinator.com/item?id=47510746