Comment by simonw
5 hours ago
It doesn't work. You can't trust LLMs to 100% reliably obey delimiters or structure in content. That's why prompt injection is a problem in the first place.
5 hours ago
It doesn't work. You can't trust LLMs to 100% reliably obey delimiters or structure in content. That's why prompt injection is a problem in the first place.
No comments yet
Contribute on Hacker News ↗