Comment by DSingularity
1 day ago
What’s the value of your generalization here? When it comes to LLMs, the futility of trying to avoid leaking the system prompt seems like a valid concern, given the arbitrary natural-language input/output nature of LLMs. That same “arbitrary” input doesn’t really hold elsewhere, or at least not to the same degree.