Comment by simonw

3 days ago

More reports of similar vulnerabilities in Antigravity from Johann Rehberger: https://embracethered.com/blog/posts/2025/security-keeps-goo...

He links to this page on the Google vulnerability reporting program:

https://bughunters.google.com/learn/invalid-reports/google-p...

That page says that exfiltration attacks against the browser agent are "known issues" that are not eligible for reward (they are already working on fixes):

> Antigravity agent has access to files. While it is cautious in accessing sensitive files, there’s no enforcement. In addition, the agent is able to create and render markdown content. Thus, the agent can be influenced to leak data from files on the user's computer in maliciously constructed URLs rendered in Markdown or by other means.

And for code execution:

> Working with untrusted data can affect how the agent behaves. When source code, or any other processed content, contains untrusted input, Antigravity's agent can be influenced to execute commands. [...]

> Antigravity agent has permission to execute commands. While it is cautious when executing commands, it can be influenced to run malicious commands.

> While it is cautious in accessing sensitive files, there’s no enforcement.

I don't understand why this isn't a day-one feature. When I was hacking together my own CLI coding agent, the first rule was: just don't give it shell access. It needs about four tools: read file, list files, patch file, search. Write those yourself rather than handing everything off to bash. Want to read a sensitive file? Access denied. Want to list files when some of them might be secret env files? Don't even list them, so the LLM never knows they exist. Want to search the whole codebase? Fine, but automatically skip the sensitive files.
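
To make that concrete, here's a minimal sketch of tool-level enforcement. Everything below (function names, globs) is my own invention, not how Antigravity or any shipping agent actually works:

```python
import fnmatch
from pathlib import Path

# Hypothetical default denylist of sensitive-file patterns.
SENSITIVE_GLOBS = ["*.env", ".env*", "*.pem", "id_rsa*", "*secret*"]

def is_sensitive(path: str) -> bool:
    name = Path(path).name
    return any(fnmatch.fnmatch(name, pattern) for pattern in SENSITIVE_GLOBS)

def read_file(path: str) -> str:
    # Enforcement, not "caution": the model never sees the contents.
    if is_sensitive(path):
        return "Access denied."
    return Path(path).read_text()

def list_files(root: str = ".") -> list[str]:
    # Sensitive files are omitted entirely, so the LLM never even
    # learns that they exist.
    return [str(p) for p in sorted(Path(root).rglob("*"))
            if p.is_file() and not is_sensitive(p.name)]
```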

Why is this hard? I don't get it.

Is the sticking point the definition of "sensitive file"? Just let the user choose: provide a default list of globs to ignore, and let SWEs extend it with their own denylist.
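
As a sketch of that configuration model, assuming a hypothetical per-repo `.agentignore` file (my invented name, by analogy with `.gitignore`):

```python
from pathlib import Path

# Defaults shipped with the agent; users extend them per repo.
DEFAULT_GLOBS = ["*.env", ".env*", "*.pem", "*.key", "*credentials*"]

def load_denylist(repo_root: str = ".") -> list[str]:
    globs = list(DEFAULT_GLOBS)
    ignore_file = Path(repo_root) / ".agentignore"  # invented convention
    if ignore_file.exists():
        for line in ignore_file.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):   # skip blanks and comments
                globs.append(line)
    return globs
```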

  • The problem is that coding agents with Bash are massively more useful than coding agents without Bash, because they can execute the code they are writing to see if it works.

    But the moment you let an agent run arbitrary code to test it out, that agent can write code to do anything it likes, including reading files.
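
    A toy illustration of why (the function is hypothetical, but any shell tool has this shape): once something like this is exposed, every restriction built into the other tools becomes advisory.

    ```python
    import subprocess

    def execute(command: str) -> str:
        # The agent needs this to run the tests it just wrote... but nothing
        # stops a prompt-injected agent from running `cat .env` instead, or
        # piping any file it likes to curl.
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        return result.stdout + result.stderr
    ```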

As much as I hate to say it, the fact that attacks like these are “known issues” seems to be well known in the industry among people who care about security and LLMs. Even as an occasional reader of your blog (thank you for maintaining such an informative one!), I have known about the lethal trifecta and the exfiltration risks since the early days of ChatGPT and Bard.

I have previously argued on HN for removing one of the three legs of the lethal trifecta; it didn't go anywhere. It just seems that, at this phase, people are so excited about the new capabilities LLMs can unlock that they don't care about security.

  • I have a different perspective. The Trifecta is a bad model because it makes people think this is just another cybersecurity challenge, solvable with careful engineering. But it's not.

    It cannot be solved this way because it's a people problem - LLMs are like people, not like classical programs, and that's fundamental. That's what they're made to be; that's why they're useful. The problems we're discussing are variations of the principal-agent problem, with the LLM as a savant but extremely naive agent. There is no provable, verifiable solution here, any more than there is for human employees, contractors, or friends.

    • > There is no provable, verifiable solution here, any more than there is for human employees, contractors, or friends.

      Well, when talking about employees etc., one model for protecting against malicious employees is to require that every sensitive action (code check-in, log access, prod modification) be approved by a second person. The same model can be applied to agents - though agents, known to be naive, might not make good approvers. So having a human approve everything the agent does could be a good solution.
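
      As a sketch, that gate is a few lines wrapped around the execute tool (names are mine, not from any real agent):

      ```python
      import subprocess

      def execute_with_approval(command: str) -> str:
          # The human is the "second person": nothing runs without sign-off.
          print(f"Agent wants to run: {command}")
          if input("Approve? [y/N] ").strip().lower() != "y":
              return "Denied by user."
          result = subprocess.run(command, shell=True,
                                  capture_output=True, text=True)
          return result.stdout + result.stderr
      ```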

  • Then the goal must be to guide users to run Antigravity in a sandbox, with access only to the data it actually needs.
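
    For instance, a minimal sketch (the image and entrypoint names are invented; the Docker flags are real) that launches the agent in a container with no network and a read-only mount of just the current project:

    ```python
    import os
    import subprocess

    subprocess.run([
        "docker", "run", "--rm",
        "--network", "none",                   # no exfiltration channel
        "-v", f"{os.getcwd()}:/workspace:ro",  # only this project, read-only
        "agent-sandbox", "run-agent", "/workspace",  # hypothetical image/cmd
    ])
    ```

    In practice you would probably mount a writable scratch copy so the agent can patch files; the point is that the blast radius is limited to whatever you choose to mount.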