Comment by warkdarrior

6 months ago

So you have some hierarchy of LLMs. The first LLM that sees the prompt is vulnerable to prompt injection.