Comment by Wowfunhappy
3 hours ago
The way to solve it is to make the AI “smart” enough to understand it’s being tricked, and refuse.
Whether this is possible depends almost entirely on how much better we’re able to make these LLMs before (if) we hit a wall. Everyone has a different opinion on this and I absolutely don’t know the answer.