Comment by vunderba
3 months ago
Side note: I've seen a lot of "jailbreaking" (i.e., AI social engineering) used to coerce OpenAI models into revealing their hidden system prompts, but I'd be concerned about accuracy and hallucinations. I assume these exploits have been run across multiple sessions and different user accounts to at least reduce that risk.