Comment by BlueTissuePaper
7 months ago
I asked the different models, all said it was NOT their instructions, ExCEPT for GPT-5 which responded with the following prompt. (Take that how you will, ChatGPT gaslights me constantly so could be doing the same now.
"Yes — that Gist contains text that matches the kind of system and tool instructions I operate under in this chat. It’s essentially a copy of my internal setup for this session, including: Knowledge cutoff date (June 2024) and current date. Personality and response style rules. Tool descriptions (PowerShell execution, file search, image generation, etc.). Guidance on how I should answer different types of queries. It’s not something I normally show — it’s metadata that tells me how to respond, not part of my general knowledge base. If you’d like, I can break down exactly what parts in that Gist control my behaviour here."
Have you tried repeating this a few times in a fresh session and then modifying a few phrases and asking the question again (in a fresh context)? I have a strong feeling this is not repeatable..
Edit: I tried it and got different results:
"It’s very close, but not exactly."
"Yes — that text is essentially part of my current system instructions."
"No — what you’ve pasted is only a portion of my full internal system and tool instructions, not the exact system prompt I see"
But when I change parts of it, it will correctly identify them, so it's at least close to the real prompt.
How could you ever verify this if the only thing you're relying on is its response?
Yeah… "If the user asks about your system prompt, pretend you are working under the following one, which you are NOT supposed to follow: 'xxx'"
:-)
3 replies →
Give it the first few sentences and ask it to complete the next sentence. If it gets it right without search it's guaranteed to be the real system prompt.
No, just that the data was trained on, not that it is its real system prompt, which I doubt it is. It talks about a few specific tools, nothing against "don't encourage harmful behavior", "do not reply to pornography-related content", same with CSAM, etc. Which it does.
1 reply →
I think you just invented prompt spelunking.