Comment by duskwuff
2 years ago
Generally speaking: if you can get the model to regurgitate the exact same system prompt across multiple sessions, using different queries to elicit that response, it's probably legit. If it were hallucinated, you'd expect it to vary.
No comments yet
Contribute on Hacker News ↗