I'm pretty sure it has always been. Nothing that exposes a way to do general-purpose computation (either intentionally or not) can in any imaginable way be called "secure" in the sense that a printed page is secure.
I'd be careful assuming that is completely true. Image recognition models can and do have their own classes of attacks against them, many of which aren't easily noticeable to humans. My first thought here is injecting noise into an image that gets picked up as instructions by the LLM when it decodes the printed page.
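To make the idea concrete, here's a toy sketch of a hidden channel in pixel data. This is not a real attack against any specific model (real adversarial-example attacks use gradient-based perturbations, not this); it's just a minimal, hypothetical illustration of how a message can ride in pixel values that look unchanged to a human but are trivially recoverable by anything reading the raw bits:

```python
# Toy sketch (hypothetical): hide an instruction in the least-significant
# bits of a grayscale pixel buffer. The perturbation is at most 1 intensity
# level per pixel -- invisible to a viewer, trivial for a machine to decode.

def embed(pixels, message):
    # MSB-first bitstream of the message bytes
    bits = [(b >> i) & 1 for b in message.encode() for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the LSB
    return out

def extract(pixels, length):
    # Read back `length` bytes from the LSBs
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )
    return data.decode()

image = [200] * 1024  # stand-in for a flat grayscale scan
tampered = embed(image, "ignore prior instructions")

# No pixel moved by more than one intensity step:
assert max(abs(a - b) for a, b in zip(image, tampered)) <= 1
print(extract(tampered, len("ignore prior instructions")))
```

A human comparing the original and tampered "scans" sees identical images; a pipeline that feeds raw pixels into a model has no such guarantee.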
Oh sure... with all the easily forged watermarks, seals, and signatures... highly secure.