Comment by calibas

1 day ago

I see an enormous threat here, I think you're just scratching the surface.

You have a customer facing LLM that has access to sensitive information.

You have an AI agent that can write and execute code.

Just image what you could do if you can bypass their safety mechanisms! Protecting LLMs from "social engineering" is going to be an important part of cybersecurity.

> You have a customer facing LLM that has access to sensitive information.

Why? You should never have an LLM deployed with more access to information than the user that provides its inputs.

  • Having sensitive information is kind of inherent to the way the training slurps up all the data these companies can find. The people who run chatgpt don't want to dox people but also don't want to filter its inputs. They don't want it to tell you how to kill yourself painlessly but they want it to know what the symptoms of various overdoses are.

> You have a customer facing LLM that has access to sensitive information…You have an AI agent that can write and execute code.

Don’t do that then?

Seems like a pretty easy fix to me.

Yes, agents. But for that, I think that the usual approaches to censor LLMs are not going to cut it. It is like making a text box smaller on a web page as a way to protect against buffer overflows, it will be enough for honest users, but no one who knows anything about cybersecurity will consider it appropriate, it has to be validated on the back end.

In the same way a LLM shouldn't have access to resources that shouldn't be directly accessible to the user. If the agent works on the user's data on the user's behalf (ex: vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.

  • > If the agent works on the user's data on the user's behalf (ex: vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.

    Tricking it into writing malware isn't the big problem that I see.

    It's things like prompt injections from fetching external URLs, it's going to be a major route for RCE attacks.

    https://blog.trailofbits.com/2025/10/22/prompt-injection-to-...

    There's plenty of things we should be doing to help mitigate these threats, but not all companies follow best practices when it comes to technology and security...

It's a stochastic process. You cannot guarantee its behavior.

> customer facing LLM that has access to sensitive information.

This will leak the information eventually.