Comment by p1necone
3 days ago
I feel like I'm going insane reading how people talk about "vulnerabilities" like this.
If you give an llm access to sensitive data, user input and the ability to make arbitrary http calls it should be blindingly obvious that it's insecure. I wouldn't even call this a vulnerability, this is just intentionally exposing things.
If I had to pinpoint the "real" vulnerability here, it would be this bit, but the way it's just added as a sidenote seems to be downplaying it: "Note: Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data."
These aren't vulnerabilities in LLMs. They are vulnerabilities in software that we build on top of LLMs.
It's important we understand them so we can either build software that doesn't expose this kind of vulnerability or, if we build it anyway, we can make the users of that software aware of the risks so they can act accordingly.
Right; the point is that it's the software that gives "access to sensitive data, user input and the ability to make arbitrary http calls" to the LLM.
People don't think of this as a risk when they're building the software, either because they just don't think about security at all, or because they mentally model the LLM as unerringly subservient to the user — as if we'd magically solved the entire class of philosophical problems Asimov pointed out decades ago without even trying.
[dead]