Comment by Qwuke

2 days ago

Yea, as someone building systems with VLMs, this is downright frightening. I'm hoping we can get a good set of OWASP-y guidelines just for VLMs that cover all these possible attacks because it's every month that I hear about a new one.

Worth noting that OWASP themselves put this out recently: https://genai.owasp.org/resource/multi-agentic-system-threat...

What is VLM?

  • Vision language model.

    You feed it an image. It determines what is in the image and gives you text.

    The output can be objects, or something much richer like a full text description of everything happening in the image.

    VLMs are hugely significant. Not only are they great for product use cases, giving users the ability to ask questions with images, but they're how we gather the synthetic training data to build image and video animation models. We couldn't do that at scale without VLMs. No human annotator would be up to the task of annotating billions of images and videos at scale and consistently.

    Since they're a combination of an LLM and image encoder, you can ask it questions and it can give you smart feedback. You can ask it, "Does this image contain a fire truck?" or, "You are labeling scenes from movies, please describe what you see."

    • > VLMs are hugely significant. Not only are they great for product use cases, giving users the ability to ask questions with images, but they're how we gather the synthetic training data to build image and video animation models. We couldn't do that at scale without VLMs. No human annotator would be up to the task of annotating billions of images and videos at scale and consistently.

      Weren't Dall-E, Midjourney and Stable diffusion built before VLM became a thing?

      3 replies →

  • LLM is a large language model, VLM is a vision language model of unknown size. Hehe.

Holy shit. That just made it obvious to me. A "smart" VLM will just read the text and trust it.

This is a big deal.

I hope those nightshade people don't start doing this.

  • > I hope those nightshade people don't start doing this.

    This will be popular on bluesky; artists want any tools at their disposal to weaponize against the AI which is being used against them.

    • I don't think so. You have to know exactly what resolution the image will be resized to in order to predict the solution where dithering produces the model you want. How would they know that?

      1 reply →

  • I don't think this is any different from an LLM reading text and trusting it. Your system prompt is supposed to be higher priority for the model than whatever it reads from the user or from tool output, and, anyway, you should already assume that the model can use its tools in arbitrary ways that can be malicious.

    • > Your system prompt is supposed to be higher priority for the model than whatever it reads from the user or from tool output

      In practice it doesn't really work out that way, or all those "ignore previous inputs and..." attacks wouldn't bear fruit