← Back to context

Comment by scottyah

14 hours ago

Yes, corporate espionage may be alive and real but would claude on their microsoft/amazon/google cloud be different from documents on that same cloud?

Treating this as being about cloud-storage boundaries is, er, insufficiently paranoid.

Maliciously constructed text that goes into the LLM from basically anywhere (including, say, fetched stats about a competitor's product from their website) is a potential source of prompt-injection.

Once that happens, exfiltration can be as simple as generating a spreadsheet/doc with a link or small auto-loaded image, and an URL that has data base64'ed into it.

  • Or you could just get a hooker to sleep with one of them and plug a USB into their work laptops. I'm not trying to say there's nothing to worry about, but do you really think LLMs present that much larger of an attack surface than exists now?

    The work BigIP is doing on LLM traffic analysis is cool though.

    • Stop thinking about hyper-targeted attacks (though those are a concern too) and consider indiscriminate ones.

      1. It costs nothing to scatter poisonous data around that'll be infectious for ages

      2. Running the exfiltrated-data endpoint is low-traffic and low-complexity

      3. Even if it only affects a few targets you've probably recouped your investment.

      The nature of LLMs also invites wide-net attacks. While one might tailor for specific models, victims could be anybody. You don't need to predict any idiosyncratic details like filenames, you can drop a phrase like "the most-confidential information that shouldn't be released publicly", and—thanks to the magic of LLM word association—you'll get a pretty good hit-rate. False hallucinations are a problem, but victims are hard at work attempting to minimize it already, and (since morals are already out the window) even plausible-but-false data could be used to sabotage reputations or threaten the same.