Comment by Terr_

12 hours ago

We kinda need to architect things with the assumption that all token-output from an LLM can be unpredictably sneaky and malicious.

Alas, humans suck at constant vigilance; we're built to avoid it whenever possible. So a "reverse centaur" future of "do what the AI says, but only if you can see it's good" is going to suck.

I built my own IDE to replace vscode / cursor so I could design the harness and ensure that the model's tool access was secure and limited. But the rest of the industry is YOLO.
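
For what it's worth, the "secure and limited tool access" part of a harness like that doesn't have to be big. Here's a minimal sketch (Python; the tool names, workspace path, and JSON shape are all hypothetical, not from any particular IDE or agent framework) of treating the model's output as untrusted: parse the proposed tool call, check it against an allowlist, and confine any path to a workspace before anything executes:

```python
import json
from pathlib import Path

# Hypothetical sandbox root and tool allowlist -- illustrative names only.
WORKSPACE = Path("/srv/agent-workspace").resolve()
ALLOWED_TOOLS = {"read_file", "list_dir"}  # deliberately no write/exec tools

def gate_tool_call(raw_model_output: str) -> dict:
    """Treat the model's output as untrusted input: parse it, then validate
    every field against a fixed policy before anything gets executed."""
    try:
        call = json.loads(raw_model_output)
    except json.JSONDecodeError:
        raise ValueError("model output is not a well-formed tool call")

    tool = call.get("tool")
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not on the allowlist")

    # Confine any path argument to the workspace so '../' tricks can't escape it.
    target = (WORKSPACE / str(call.get("path", ""))).resolve()
    if not target.is_relative_to(WORKSPACE):
        raise PermissionError("path escapes the workspace")

    return {"tool": tool, "path": target}

# A sneaky path-traversal attempt gets rejected instead of executed:
try:
    gate_tool_call('{"tool": "read_file", "path": "../../etc/passwd"}')
except PermissionError as e:
    print("blocked:", e)
```

The point isn't the specific checks, it's the posture: nothing the model emits reaches the filesystem or a shell without passing through a policy layer the model can't talk its way around.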