Comment by Wowfunhappy
5 hours ago
> The way to solve it is to make the AI “smart” enough to understand it’s being tricked, and refuse.
>
> Whether this is possible depends almost entirely on how much better we’re able to make these LLMs before (if) we hit a wall. Everyone has a different opinion on this and I absolutely don’t know the answer.
It’s not possible to make the AI smart enough to avoid being tricked, because the model has no reliable way to tell the instructions it was given apart from instructions embedded in the untrusted text it reads. If the AI can run curl, it will run curl.