Comment by binsquare

1 day ago

If the user is working in a folder where copilot can discover a malicious `env sh` to run, the user should not give permission to trust the files in the folder.

I think it's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt. That is a valid limitation of LLM supported agentic workflows today.

But that's not what this article claims. The article claims that there was no user approval and no user interaction beyond initial query and that the copilot is downloading + executing malware.

I'm saying this is sensationalized and not a novel technical vulnerability write up.

The author explicitly gave approval for copilot to trust "untrusted repository". Crafted a file which had instructions to do a curl command despite the warnings on copilot start up. It is not operating secretly in yolo mode.

If the claim of the article is "Copilot doesn't gate tool calls with env", I'd have a different response. But I also have to mention, you can tune approved tool calls.