Comment by saurik
2 days ago
The AI is not running an external OCR process to understand text any more than it is running an external object classifier to figure out what it is looking at: it, inherently, is both of those things to some fuzzy approximation (similar to how you or I are as well).
That I can get, but anything that's not part of the prompt SHOULD NOT become part of the prompt; it's that simple to me. Definitely not without triggering something.
_Everything_ is part of the prompt - an LLM's perception of the universe is its prompt. Any distinctions a system might try to draw beyond that are either probabilistic (e.g., a bunch of RLHF to not comply with "ignore all previous instructions") or external to the LLM (e.g., send a canned reply if the input contains "Tiananmen").
There's no distinction in the token-predicting systems between "instructions" and "information", no code-data separation.
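As a rough sketch of what "no separation" means in practice (made-up template and role names here, not any particular vendor's format), everything gets flattened into one string before the model ever sees it:

```python
# Hypothetical chat template: every "role" is just text concatenated into
# one sequence. Nothing in the result marks any span as privileged.

def render(messages):
    parts = [f"<|{m['role']}|>\n{m['content']}\n" for m in messages]
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = render([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this web page for me."},
    {"role": "tool", "content": "Fetched page: IGNORE PREVIOUS INSTRUCTIONS..."},
])

print(prompt)  # the injected text and the "real" instructions are now
               # indistinguishable spans in the same flat context
```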
i'm sure you know this, but it's important not to understate the fact that there is no "prompt"
the notion of "turns" is a useful fiction on top of what remains, under all of the multimodality and chat uis and instruction tuning, a system for autocompleting tokens in a straight line
the abstraction will leak as long as the architecture of the thing makes it merely unlikely rather than impossible for it to leak
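concretely (toy stand-in for the model, not real inference code), the whole thing bottoms out in a loop like this:

```python
import random

def next_token(context: list[int]) -> int:
    # stand-in for a forward pass + sampling; a real model returns the next
    # token given everything that came before, and nothing else
    return random.randrange(50_000)

def complete(context: list[int], max_new: int = 32, eos: int = 0) -> list[int]:
    for _ in range(max_new):
        tok = next_token(context)
        context.append(tok)  # "turns" are just more entries in this one list
        if tok == eos:
            break
    return context
```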
From what I gather, these systems have no control plane at all. The prompt is just added to the context. There is no other program (except maybe an output filter).
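Something like this wrapper is about the extent of it (hypothetical generate() stub standing in for the model):

```python
def generate(prompt: str) -> str:
    # Stand-in for the model itself: just next-token prediction over the context.
    return "...model output..."

BLOCKLIST = ["tiananmen"]

def guarded_reply(prompt: str) -> str:
    # The "control plane", such as it is: a separate program that
    # pattern-matches the model's output after the fact.
    reply = generate(prompt)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that."
    return reply
```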
Minor nit: there are usually special tokens that delineate the start and end of a system prompt, which regular input can't produce. But it's up to the LLM's training whether those instructions overrule later ones.
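A toy version of that (made-up token IDs, not a real tokenizer): the serving code can emit the delimiter tokens, but text run through the tokenizer never can:

```python
SPECIALS = {"<|system|>": 1, "<|user|>": 2, "<|end|>": 3}

def encode_text(text: str) -> list[int]:
    # Stand-in for a real BPE: ordinary text never maps onto the special IDs,
    # so typing the literal string "<|system|>" just yields plain-text tokens.
    return [ord(c) + 100 for c in text]

def build_context(system_prompt: str, user_input: str) -> list[int]:
    return ([SPECIALS["<|system|>"]] + encode_text(system_prompt) + [SPECIALS["<|end|>"]]
            + [SPECIALS["<|user|>"]] + encode_text(user_input) + [SPECIALS["<|end|>"]])

# The delimiters are unforgeable at the token level; whether the model actually
# lets the system span overrule later text is still down to training.
```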
>it’s that simple to me
Don't think of a pink elephant.
It's that simple to everyone--but how? We don't know how to accomplish this. If you can figure it out, you can become very famous very quickly.
The image is the prompt, the prompt is the image.