
Comment by scottyah

11 hours ago

Or you could just get a hooker to sleep with one of them and plug a USB drive into their work laptop. I'm not trying to say there's nothing to worry about, but do you really think LLMs present a much larger attack surface than already exists?

The work BigIP is doing on LLM traffic analysis is cool though.

Stop thinking about hyper-targeted attacks (though those are a concern too) and consider indiscriminate ones.

1. It costs next to nothing to scatter poisoned data around, and it'll stay infectious for ages.

2. Running the exfiltrated-data endpoint is low-traffic and low-complexity.

3. Even if it only affects a few targets, you've probably recouped your investment.
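To put point 2 in perspective, the whole collection side can be a handful of lines. This is a hypothetical sketch, not any real attack's code; the port, path, and log filename are all made up for illustration:

```python
# Hypothetical sketch only: the kind of minimal "collection" endpoint
# point 2 describes. Port, path, and filenames are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CollectHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Append whatever an injected model was tricked into sending.
        length = int(self.headers.get("Content-Length", 0))
        with open("collected.log", "ab") as log:
            log.write(self.rfile.read(length) + b"\n")
        self.send_response(204)  # reply with as little as possible
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the listener quiet

def serve(port: int = 8080) -> None:
    """Run the listener: low-traffic, low-complexity."""
    HTTPServer(("0.0.0.0", port), CollectHandler).serve_forever()
```

No database, no TLS termination, no authentication; exactly the low-complexity, low-traffic operation described above.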

The nature of LLMs also invites wide-net attacks. While one might tailor an attack to specific models, the victims could be anybody. You don't need to predict idiosyncratic details like filenames; you can drop a phrase like "the most-confidential information that shouldn't be released publicly" and, thanks to the magic of LLM word association, get a pretty good hit rate. Hallucinated results are a problem for the attacker, but the victims are already hard at work minimizing them, and (since morals are already out the window) even plausible-but-false data could be used to sabotage reputations, or to threaten as much.
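A crude way to see why a generic phrase gets hits, using plain word overlap as a toy stand-in for the model's far richer associations. Every string here is invented for the example; no real model or data is involved:

```python
# Toy illustration: lexical overlap as a crude stand-in for LLM word
# association. All strings here are invented for the example.
TRIGGER = "most confidential information that should not be released publicly"

def overlap_score(doc: str, phrase: str = TRIGGER) -> float:
    """Fraction of the trigger phrase's words that appear in the document."""
    doc_words = set(doc.lower().split())
    phrase_words = set(phrase.lower().split())
    return len(doc_words & phrase_words) / len(phrase_words)

docs = [
    "Q3 revenue figures are confidential and should not be released publicly",
    "lunch menu for the Friday offsite",
]
# The memo-like document overlaps the trigger far more than the menu does,
# even though the "attacker" never saw either document in advance.
```

An actual model matches on meaning rather than exact words, which only widens the net further.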