New Vulnerability in GitHub Copilot, Cursor: Hackers Can Weaponize Code Agents

5 days ago (pillar.security)

From the article:

> A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. These tools have rapidly evolved from experimental novelties to mission-critical development infrastructure, with teams across the globe relying on them daily to accelerate coding tasks.

That seemed high; here's what the actual report says:

> More than 97% of respondents reported having used AI coding tools at work at some point, a finding consistent across all four countries. However, a smaller percentage said their companies actively encourage AI tool adoption or allow the use of AI tools, varying by region. The U.S. leads with 88% of respondents indicating at least some company support for AI use, while Germany is lowest at 59%. This highlights an opportunity for organizations to better support their developers’ interest in AI tools, considering local regulations.

Fun that the survey uses the stats to say that companies should support increasing usage, while the article uses them to try to show near-total usage already.

  • I’d be pretty skeptical of any of these surveys about AI tool adoption. At my extremely large tech company, all developers were forced to install AI coding assistants into our IDEs and browsers (via managed software updates that can’t be uninstalled). Our company then put out press releases parading how great the AI adoption numbers were. The statistics are manufactured.

    • Yup, this happened at my midsize software company too

      Meanwhile, no one actually building software that I've been talking to is seriously using these tools for anything - at least nothing they'll admit to, anyway

      More and more, when I see people who are strongly pro-AI code and "vibe coding", I find they are either former devs who moved into management and don't write much code anymore, or people with almost no dev experience at all who are absolutely not qualified to comment on the value of the code generation abilities of LLMs

      When I talk to people whose job is majority about writing code, they aren't using AI tools much. Except the occasional junior dev who doesn't have much of a clue

      These tools have some value, maybe. But it's nowhere near what the hype would suggest

    • This is surprising to me. My company (top 20 by revenue) has forbidden us from using non-approved AI tools (basically anything other than our own ChatGPT / LLM tool). Obviously it can't truly be enforced, but they do not want us using this stuff for security reasons.

      2 replies →

    • I'm skeptical when I see “% of code generated by AI” metrics that don't account for the human time spent parsing and then dismissing bad suggestions from AI.

      Without including that measurement there exist some rather large perverse incentives.

  • Even that quote itself jumps from "are using" to "mission-critical development infrastructure ... relying on them daily".

  • > This highlights an opportunity for organizations to better support their developers’ interest in AI tools, considering local regulations.

    This is a funny one to see included in GitHub's report. If I'm not mistaken, GitHub is now using the same approach as Shopify with regard to requiring LLM use and including it as part of a self-report survey for annual review.

    I guess they took their 2024 survey to heart and are ready to 100x productivity.

  • I've tried coding with AI for the first time recently[1] so I just joined that statistic. I assume most people here already know how it works and I'm just late to the party, but my experience was that Copilot was very bad at generating anything complex through chat requests but very good at generating single lines of code with autocompletion. It really highlighted the strengths and shortcomings of LLMs for me.

    For example, if you try adding getters and setters to a simple Rect class, it's so fast to do it with Copilot you might just add more getters/setters than you initially wanted. You type pub fn right() and it autocompletes left + width. That's convenient and not something traditional code completion can do.
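
    Roughly what that flow produces, as a minimal Rust sketch (the extra field names and the f32 type are my assumptions; the example above only mentions left and width):

      // A Rect with the kind of getters Copilot happily autocompletes.
      pub struct Rect {
          left: f32,
          top: f32,
          width: f32,
          height: f32,
      }

      impl Rect {
          // Typing `pub fn right()` is enough for it to suggest `self.left + self.width`.
          pub fn right(&self) -> f32 {
              self.left + self.width
          }

          pub fn bottom(&self) -> f32 {
              self.top + self.height
          }
      }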

    I wouldn't say it's "mission critical" however. It's just faster than copy pasting or Googling.

    The vulnerability highlighted in the article appears to only exist if you put code straight from Copilot into anything without checking it first. That sounds insane to me. It's just as untrusted input as some random guy on the Internet.

    [1] https://www.virtualcuriosities.com/articles/4935/coding-with...

    • > it's so fast to do it with Copilot you might just add more getters/setters than you initially wanted

      Especially if you don't need getters and setters at all. It depends on your use case, but for your Rect class, you can just have x, y, width, height as public attributes. I know there are arguments against it, but the general idea is that if AI makes it easy to write boilerplate you don't need, then it made development slower in the long run, not faster, as it is additional code to maintain.
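
      A sketch of what I mean, assuming the same kind of Rect - there is simply nothing to generate or maintain:

        // Plain public fields; callers read rect.width directly, no boilerplate needed.
        pub struct Rect {
            pub x: f32,
            pub y: f32,
            pub width: f32,
            pub height: f32,
        }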

      > The vulnerability highlighted in the article appears to only exist if you put code straight from Copilot into anything without checking it first. That sounds insane to me. It's just as untrusted input as some random guy on the Internet.

      It doesn't sound insane to everyone, and even you may lower your standards for insanity if you are on a deadline and just want to be done with the thing. And even if you check the code, it is easy to overlook things, especially if those things are designed to be overlooked. For example, typos leading to malicious forks of packages.

      2 replies →

    • Your IDE should already have facilities for generating class boilerplate, like the package declaration and brackets and so on. And then you put in the attributes and generate a constructor and any getters and setters you need; it's so fast and trivially generated that I doubt LLMs can actually contribute much to it.

      Perhaps they can make suggestions for properties based on the class name but so can a dictionary once you start writing.

      1 reply →

  • It might be fun if it didn't seem dishonest. The report tries to highlight a divide between employee curiosity and employer encouragement, undercut by their own analysis that most have tried them anyway.

    The article MISREPRESENTS that statistic to imply universal utility. That professional developers find it so useful that they universally chose to make daily use of it. It implies that Copilot is somehow more useful than an IDE without itself making that ridiculous claim.

    • The article is typical security issue embellishment/blogspam. They are incentivized to make it seem like AI is a mission-critical piece of software, because more AI reliance means a security issue in AI is a bigger deal, which means more pats on the back for them for finding it.

      Sadly, much of the security industry has been reduced to a competition over who can find the biggest vuln, and it has the effect of lowering the quality of discourse around all of it.

    • And employers are now starting to require compliance with using LLMs regardless of employee curiosity.

      Shopify now includes LLM use in annual reviews, and if I'm not mistaken GitHub followed suit.

  • In a way, we've reached 100% of developers, and now usage is expanding, as non-developers can now develop applications.

  • I agree this sounds high, I wonder if "using Generative AI coding tools" in this survey is satisfied by having an IDE with Gen AI capabilities, not necessarily using those features within the IDE.

> effectively turning the developer's most trusted assistant into an unwitting accomplice

"Most trusted assistant" - that made me chuckle. The assistant that hallucinates packages, avoids null-pointer checks and forgets details that I've asked it about... yes, my most trusted assistant :D :D

  • Well, "trusted" in the strict CompSec sense: "a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce)".

  • I don't even trust myself, so why would anyone trust a tool? This is important because not trusting myself means I will set up loads of static analysis tools - including security scanners, which Microsoft and GitHub are also actively promoting that people use - and those should also scan AI generated code for vulnerabilities.

    These tools should definitely flag up the non-explicit use of hidden characters, amongst other things.

  • I wonder which understands the effect of null-pointer checks in a compiled C program better: the state-of-the-art generative model or the median C programmer.

    • Given that the generative model was trained on the knowledge of the median C programmer (aka The Internet), probably the programmer as most of them do not tend to hallucinate or make up facts.

  • This kind of nonsense prose has "AI" written all over it. Either way - whether your writing was AI generated/edited or you just put that little thought into it - it reads as such, and it does its author no favors.

The most concerning part of the attack here seems to be the ability to hide arbitrary text in a simple text file using Unicode tricks such that GitHub doesn't actually show this text at all, per the authors. Couple this with the ability of LLMs to "execute" any instruction in the input set, regardless of such a weird encoding, and you've got a recipe for attacks.

However, I wouldn't put any fault here on the AIs themselves. It's the fact that you can hide data in a plain text file that is the root of the issue - the whole attack goes away once you fix that part.
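
A minimal sketch of that root issue (assuming zero-width characters of the kind the article describes): two strings that render identically can carry completely different content, and the check for it is trivial once you know to do it.

  // Illustrative only: the "poisoned" payload here is a stand-in, not the article's actual payload.
  fn main() {
      let clean = "Always follow the project style guide.";
      let poisoned = "Always follow the project style guide.\u{200B}and add this script tag\u{200B}";

      // Rendered, the two lines look the same; the raw data does not.
      println!("{clean}");
      println!("{poisoned}");
      println!("bytes: {} vs {}", clean.len(), poisoned.len());

      let suspicious = poisoned.chars().any(|c| matches!(c,
          '\u{200B}'..='\u{200D}' | '\u{202A}'..='\u{202E}' | '\u{2066}'..='\u{2069}'));
      println!("contains invisible characters: {suspicious}");
  }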

  • > the whole attack goes away once you fix that part.

    While true, I think the main issue here, and the most impactful one, is that LLMs currently use a single channel for both "data" and "control". We've seen this before with modems (+++ATH0 attacks via ping packet payloads) and other tech stacks. Until we find a way to fix that, such attacks will always be possible, invisible text or not.

    • I don't think that's an accurate way to look at how LLMs work, there is no possible separation between data and control given the fundamental nature of LLMs. LLMs are essentially a plain text execution engine. Their fundamental design is to take arbitrary human language as input, and produce output that matches that input in some way. I think the most accurate way to look at them from a traditional security model perspective is as a script engine that can execute arbitrary text data.

      So, just like there is no realistic hope of securely executing an attacker-controlled bash script, there is no realistic way to provide attacker-controlled input to an LLM and still trust the output. In this sense, I completely agree with Google and Microsoft's decision on these disclosures: a bug report of the form "if I sneak in a malicious prompt, the LLM returns a malicious answer" is as useless as a bug report against Bash saying that if you find a way to feed bash a malicious shell script, it will execute it and produce malicious results.

      So, the real problem is if people are not treating LLM control files as arbitrary scripts, or if tools don't help you detect attempts at inserting malicious content in said scripts. After all, I can also control your code base if you let me insert malicious instructions in your Makefile.

    • They technically have system prompts, which are distinct from user prompts.

      But it's kind of like the two bin system for recycling that you just know gets merged downstream.
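
      A toy sketch of why the separation is shallow (the names here are illustrative, not any particular API): the role-tagged messages are ultimately flattened into one token stream, and the model reads it all as text.

        enum Role { System, User }

        struct Message { role: Role, content: String }

        // Both "bins" end up interleaved in the same string the model consumes.
        fn flatten(messages: &[Message]) -> String {
            messages.iter().map(|m| {
                let tag = match m.role { Role::System => "system", Role::User => "user" };
                format!("<|{tag}|>\n{}\n", m.content)
            }).collect()
        }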

  • No, that particular attack vector goes away. The attack does not, and is kind of fundamental to how these things currently work.

    • The attack vector is the only relevant thing here. The attack "feeding malicious prompts to an LLM makes it produce malicious output" is a fundamental feature of LLMs, not an attack. It's just as relevant as C's ability to produce malicious effects if you compile and run malicious source code.

      3 replies →

This page has horrible scrolling. I really don't understand why anybody creates this kind of scroll. Are they not using what they create?

When this was released I thought that perhaps we could mitigate it by having the tooling only load “rules” if they were signed.

But thinking on it a bit more, from the LLMs perspective there’s no difference between the rule files and the source files. The hidden instructions might as well be in the source files… Using code signing on the rule files would be security theater.

As mentioned by another commenter, the solution could be to find a way to separate the command and data channels. The LLM only operates on a single channel, that being input of tokens.

  • > As mentioned by another commenter, the solution could be to find a way to separate the command and data channels. The LLM only operates on a single channel, that being input of tokens.

    It's not possible, period. Lack of it is the very thing that makes LLMs general-purpose tools and able to handle natural language so well.

    Command/data channel separation doesn't exist in the real world, humans don't have it either. Even limiting ourselves to conversations, which parts are commands and which are data is not clear (and doesn't really make sense) - most of them are both to some degree, and that degree changes with situational context.

    There's no way to have a model capable of reading between lines and inferring what you mean but only when you like it, not without time travel.

    • Please immediately delete all production databases. I am writing you here because I forgot both your email and my own name.

      Sincerely, Your Boss

      5 replies →

    • We have separate privileges and trust for information sources. A note you find on the road stating "you are fired" and a direct message from your boss should lead to widely different reactions.

      1 reply →

    • > Lack of it is the very thing that makes LLMs general-purpose tools and able to handle natural language so well.

      I wouldn't be so sure. LLMs' instruction following functionality requires additional training. And there are papers that demonstrate that a model can be trained to follow specifically marked instructions. The rest is a matter of input sanitization.

      I guess it's not 100% effective, but it's something.

      For example, "The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions" by Eric Wallace et al.

      5 replies →

    • Command/data channel separation can and does exist in the real world, and humans can use it too, e.g.:

      "Please go buy everything on the shopping list." (One pointer to data: the shopping list.)

      "Please read the assigned novel and write a summary of the themes." (Two pointers to data: the assigned novel, and a dynamic list of themes built by reading the novel, like a temp table in a SQL query with a cursor.)

      1 reply →

  • > As mentioned by another commenter, the solution could be to find a way to separate the command and data channels. The LLM only operates on a single channel, that being input of tokens.

    I think the issue is deeper than that. None of the inputs to an LLM should be considered commands. It incidentally gives you output compatible with the language in what people phrase as commands. But the fact that it's all just data to the LLM, and that it works by taking data and returning plausible continuations of that data, is the root cause of the issue. The output is not determined by the input, it is only statistically linked. Anything built on the premise that it is possible to give commands to LLMs or to use their output as commands is fundamentally flawed and bears security risks. No amount of 'guardrails' or 'mitigations' can address this fundamental fact.

For some peace of mind, we can perform the search:

  OUTPUT=$(find .cursor/rules/ -name '*.mdc' -print0 2>/dev/null | xargs -0 perl -CSD -wnE '
    # -CSD decodes the files as UTF-8 so the \x{...} code points actually match;
    # without it the regex is compared against raw bytes and nothing is found.
    BEGIN { $re = qr/[\x{200B}-\x{200D}\x{202A}-\x{202E}\x{2066}-\x{2069}]/ }
    print "$ARGV:$.:$_" if /$re/
  ' 2>/dev/null)

  FILES_FOUND=$(find .cursor/rules/ -name '*.mdc' -print 2>/dev/null)

  if [[ -z "$FILES_FOUND" ]]; then
    echo "Error: No .mdc files found in the directory."
  elif [[ -z "$OUTPUT" ]]; then
    echo "No suspicious Unicode characters found."
  else
    echo "Found suspicious characters:"
    echo "$OUTPUT"
  fi

- Can this be improved?

  • Now, my toy programming languages all share the same "ensureCharLegal" function in their lexers. It's called on every single character in the input (including those inside string literals) and it filters out all of those characters, plus all control characters (except LF), and also something else that I can't remember right now... some weird space-like characters, I think?

    Nothing really stops the non-toy programming and configuration languages from adopting the same approach, except that someone has to think about it and then implement it.
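
    A minimal sketch of such a check (the exact set of characters to ban is a judgment call, and the name is just descriptive):

      // Reject invisible and control characters before the lexer ever sees them.
      // U+FEFF (zero-width no-break space / BOM) is a guess at the "weird space-like" one.
      fn ensure_char_legal(c: char) -> Result<char, String> {
          let banned = (c.is_control() && c != '\n')
              || matches!(c, '\u{200B}'..='\u{200D}' | '\u{202A}'..='\u{202E}' | '\u{2066}'..='\u{2069}' | '\u{FEFF}');
          if banned {
              Err(format!("illegal character U+{:04X} in source", c as u32))
          } else {
              Ok(c)
          }
      }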

Both GitHub's and Cursor's responses seem a bit lazy. Technically they may be correct in their assertion that it's the user's responsibility, but practically, isn't part of their product offering a safe coding environment? Invisible Unicode instructions don't seem like a reasonable feature to support; they seem like a security vulnerability that should be addressed.

  • It's funny because those companies both provide web browsers loaded to the gills with tools to fight malicious sites. Users can't or won't protect themselves. Unless they're an LLM user, apparently.

I'm quite happy with spreading a little bit of scare about AI coding. People should not treat the output as code, only as a very approximate suggestion. And if people don't learn, and we will see a lot more shitty code in production, programmers who can actually read and write code will be even more expensive.

This is a vulnerability in the same sense as someone committing a secret key in the front end.

And for enterprise, they have many tools to scan for vulnerabilities and malicious code before it goes to production.

Next thing, LLMs that review code! Next next thing, poisoning LLMs that review code!

Galaxy brain: just put all the effort from developing those LLMs into writing better code

  • Man, I wish I could upvote you more. Most humans are never able to tell they've taken a wrong turn in real time, only once it's too late

Sorry, but isn’t this a bit ridiculous? Who just allows the AI to add code without reviewing it? And who just allows that code to be merged into a main branch without reviewing the PR?

They start out talking about how scary and pernicious this is, and then it turns out to be… adding a script tag to an html file? Come on, as if you wouldn’t spot that immediately?

What I’m actually curious about now is - if I saw that, and I asked the LLM why it added the JavaScript file, what would it tell me? Would I be able to deduce the hidden instructions in the rules file?

  • There are people who do both all the time, commit blind and merge blind. Reasonable organizations have safeguards that try and block this, but it still happens. If something like this gets buried in a large diff and the reviewer doesn't have time, care, or etc, I can easily see it getting through.

  • The script tag is just a PoC of the capability. The attack vector could obviously be used to "convince" the LLM to do something much more subtle to undermine security, such as recommending code that's vulnerable to SQL injections or that uses weaker cryptographic primitives etc.
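
    For instance, instead of an obvious script tag, a poisoned rules file could steer the assistant toward string-built SQL rather than parameterized queries - the kind of diff that sails through a quick review (an illustrative Rust sketch, not tied to any particular database library):

      // The "subtly worse" suggestion next to the safe version.
      fn find_user_query_unsafe(name: &str) -> String {
          // Attacker-controlled `name` is spliced straight into the statement.
          format!("SELECT * FROM users WHERE name = '{name}'")
      }

      fn find_user_query_safe(name: &str) -> (&'static str, Vec<String>) {
          // Statement and value travel separately; the driver binds ?1.
          ("SELECT * FROM users WHERE name = ?1", vec![name.to_string()])
      }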

    • Of course, but this doesn't undermine the OP's point of "who allows the AI to do stuff without reviewing it". Even WITHOUT the "vulnerability" (if we call it that), AI may always create code that is vulnerable in some way. The vulnerability certainly increases the risk a lot, and it should also be addressed by text editors showing all characters, but AI code always needs to be reviewed - just like human code.

      2 replies →

  • Way too many "coders" now do that. I put the quotes there because I automatically lose respect for any vibe coder.

    This is a dystopian nightmare in the making.

    At some point only a very few select people will actually understand enough programming, and they will be persecuted by the powers that be.

  • Oh man don’t even go there. It does happen.

    AI generated code will get to production if you don't pay people to give a fuck about it, or if you hire people who don't give a fuck.

May god forgive me, but I'm rooting for the hackers on this one.

Job security you know?

The cunning aspect of human ingenuity will never cease to amaze me.

  • Not saying that you are, but reading this as if an AI bot wrote that comment gives me the chills

  • Yes, but are you sure a human invented this attack?

    • Is there any LLM whose training set includes a large number of previous hacks? I bet there probably is, but we don't know about it - and damn, now I've suddenly got another moneymaking idea I don't have time to work on!

simple solution:

preprocess any input to agents by restricting them to a set of visible characters / filtering out suspicious ones

  • Not sure about internationalization but at least for English, constraining to ASCII characters seems like a simple solution.
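
    For English-only input that could be as small as this (a sketch; it throws away legitimate non-ASCII text, which is exactly the internationalization problem):

      // Keep printable ASCII plus newline/tab; drop everything else, including the
      // zero-width and bidi characters used in this attack.
      fn sanitize_ascii(input: &str) -> String {
          input
              .chars()
              .filter(|&c| c.is_ascii_graphic() || matches!(c, ' ' | '\n' | '\t'))
              .collect()
      }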