
Comment by PaulStatezny

13 hours ago

I agree. But you can speak imperatively to agents as well ("Here are specific steps; follow them") and they can still screw up. :) I think what you're looking for is determinism, not imperativism.

And to your point: instructing a (non-deterministic) LLM declaratively ("get me to this end state") compounds the likelihood of going off the rails.

I don’t think I’m confusing the two, but it is a real issue. As I said in a sibling comment, Terraform is a great example of something that is declarative and also non-deterministic: you can’t control upstream API/provider changes, even between two plans run back to back. That’s a lot like what working with agents feels like to me.
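The Terraform point can be sketched in a few lines of Python (hypothetical names, not Terraform’s actual API): a declarative "plan" is just a diff between the desired state and whatever upstream reports at that moment, so the same desired state can produce different plans when the provider drifts between runs.

```python
# Minimal sketch of declarative-but-non-deterministic planning.
# "plan" diffs desired state against live upstream state; names are illustrative.

def plan(desired: dict, fetch_upstream) -> dict:
    """Return the changes needed to move upstream state to the desired state."""
    actual = fetch_upstream()
    return {key: value for key, value in desired.items() if actual.get(key) != value}

desired = {"instance_type": "m5.large", "ami": "ami-123"}

# First plan: upstream already matches the desired AMI, so one change remains.
plan_1 = plan(desired, lambda: {"instance_type": "t2.micro", "ami": "ami-123"})

# Second plan, moments later: the provider has drifted (the AMI changed out
# from under us), so the identical desired state yields a different plan.
plan_2 = plan(desired, lambda: {"instance_type": "t2.micro", "ami": "ami-999"})

print(plan_1)  # {'instance_type': 'm5.large'}
print(plan_2)  # {'instance_type': 'm5.large', 'ami': 'ami-123'}
```

The inputs you control (the desired state) are fixed, but the output depends on external state you don’t control, which is the sense in which both Terraform and agents are non-deterministic.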