Comment by lelanthran
18 hours ago
Yeah, this is what your agents do even before someone tries to trick them into doing something stupid.
Remember this: these things follow instructions so poorly that they nuke everything without anyone even trying to break the prompt. Imagine how easily someone could break the prompt if the agent is ever given user input.