Comment by simonw
3 days ago
This isn't a bug in the LLMs. It's a bug in the software that uses those LLMs.
An LLM on its own can't execute code. An LLM harness like Antigravity adds that ability, and if it does it carelessly that becomes a security vulnerability.
No matter how many prompt changes you make, it won't be possible to fix this.
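To illustrate the "be careful in the harness, not the prompt" point, here is a minimal sketch of what a more careful harness layer could look like. The function name, allow-list, and approval flag are all hypothetical, not Antigravity's actual API: the idea is simply that the harness, not the model, decides whether a proposed command may run.

```python
# Hypothetical harness sketch (illustrative names, not any real tool's API).
# The model can only *propose* a command; the harness gates execution behind
# an allow-list and explicit human approval instead of running it blindly.

ALLOWED_PROGRAMS = {"ls", "cat"}  # example allow-list of safe programs

def review_tool_call(command: str, human_approved: bool) -> str:
    """Decide what to do with a model-proposed shell command."""
    parts = command.split()
    program = parts[0] if parts else ""
    if program not in ALLOWED_PROGRAMS:
        # Reject outright: no prompt engineering can reach this branch.
        return "rejected: program not on allow-list"
    if not human_approved:
        # Even allow-listed commands wait for a human in the loop.
        return "pending: awaiting human approval"
    return f"would execute: {command}"
```

The defense lives entirely in deterministic harness code, so a prompt-injected model output can at worst propose a command that gets rejected or queued for review.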
Right; so the point is to be more careful about the other side of the "agent" equation.
So, what's your conclusion from that bit of wisdom?