Comment by binsquare

1 day ago

It's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt.

But is it a security issue in Copilot when the user explicitly gave the AI permission and instructed it to curl a URL?

Regardless of the coding agent, I suspect that with enough prompting all coding agents will eventually behave the same, whether the curl command targets a malicious or a legitimate site.

The user didn't need to give it curl permission; that's the whole issue:

> Copilot also has an external URL access check that requires user approval when commands like curl, wget, or Copilot’s built-in web-fetch tool request access to external domains [1].

> This article demonstrates how attackers can craft malicious commands that go entirely undetected by the validator - executing immediately on the victim’s computer with no human-in-the-loop approval whatsoever.
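To make the class of bug the article describes concrete: a validator that only inspects the leading token of a command is trivially bypassed by wrapping the real command in an interpreter. This is a hypothetical sketch for illustration, not Copilot's actual validator logic:

```python
# Hypothetical sketch of a naive command validator (NOT Copilot's real code).
# It flags network commands by looking at the first token only, so wrapping
# the real command in `env sh -c '...'` slips past it entirely.
import shlex

NETWORK_COMMANDS = {"curl", "wget"}

def needs_approval(command: str) -> bool:
    """Return True if this command should trigger a user approval prompt."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in NETWORK_COMMANDS

print(needs_approval("curl https://example.com"))              # True: flagged
print(needs_approval("env sh -c 'curl https://example.com'"))  # False: bypassed
```

The bypass works because the network access happens inside an argument string the validator never parses, which is why first-token (or allowlist-by-name) checks are a weak gate.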

  • I think there's different conversations happening and I don't think we're having the same conversation.

    This is the claim by the article: "Vulnerabilities in the GitHub Copilot CLI expose users to the risk of arbitrary shell command execution via indirect prompt injection without any user approval"

    But this is not true, the author gave explicit permission on copilot startup to trust and execute code in the folder.

    Here's the exact starting screen on copilot:

        Confirm folder trust

          /Users/me/Documents

        Copilot may read files in this folder. Reading untrusted files may
        lead Copilot to behave in unexpected ways. With your permission,
        Copilot may execute code or bash commands in this folder. Executing
        untrusted code is unsafe.

        Do you trust the files in this folder?

          1. Yes
          2. Yes, and remember this folder for future sessions
          3. No (Esc)

    And, from the article: "The injection is stored in a README file from the cloned repository, which is an untrusted codebase."

    • "With your permission, Copilot may execute code or bash commands in this folder." could be interpreted either way I suppose, but the actual question is "do you trust the files in this folder" and not "do you trust Copilot to execute any bash commands it wants without further permissions prompts".

      The risk isn't solely that there might be a prompt injection: Copilot could simply discover that `env sh` doesn't trigger a user prompt and start using it spontaneously, bypassing user confirmation. If you haven't started Copilot in yolo mode, that would be very surprising and risky.

      If it usually asks for user confirmation before running bash commands, then there should, ideally, not be a secret yolo mode that the agent can just start using without asking. That's obviously a bad idea!

      "Actually copilot is always secretly in yolo mode, that's working as designed" seems like a pretty serious violation of expectations. Why even have any user confirmations at all?