
Comment by kfarr

16 days ago

What else is an LLM supposed to do with this prompt? If you don’t want something done, why are you calling it? It’d be like calling an intern and saying you don’t want anything. Then why’d you call? The harness should allow you to deny changes, but the LLM has clearly been tuned to take action on a request.

Ask if there is something else it could do? Ask if it should make changes to the plan? Reiterate that it's here to help with anything else? Tf you mean "what else is it supposed to do", it's supposed to do the opposite of what it did.

  • I think there is some behind-the-scenes prompting from Claude Code for plan vs build mode; you can even see the agent reference that in its thought trace. Basically I think the system is saying "if in plan mode, continue planning and asking questions; when in build mode, start implementing the plan", and it looks to me(?) like the user switched from plan to build mode and then sent "no".

    From our perspective it's very funny; from the agent's perspective, maybe very confusing.

I'd want two things:

First, that it didn't confuse what the user said with its system prompt. The user never told the AI it's in build mode.

Second, any person would ask "then what do you want now?" or something. The AI should have been able to understand the intent behind a "No". We don't exactly forgive people who don't take "No" as "No"!

Seems like LLMs are fundamentally flawed as production-worthy technologies if, when given direct orders not to do something, they do the thing.

for the same reason `terraform apply` asks for confirmation before running - states can conceivably change without your knowledge between planning and execution. maybe this is less likely working with Claude by yourself but never say never... clearly, not all behavior is expected :)
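The confirmation gate `terraform apply` uses can be sketched as a simple pattern: show the plan, require an explicit "yes", and treat anything else as a refusal. This is a minimal illustrative sketch (hypothetical function names, not Terraform's actual implementation, and not how Claude Code's harness works internally):

```python
def confirm_and_apply(plan, apply_fn, ask=input):
    """Show the planned actions, then act only on an explicit 'yes'."""
    print("Planned actions:")
    for action in plan:
        print(f"  - {action}")
    answer = ask("Do you want to perform these actions? ").strip().lower()
    if answer != "yes":
        # Anything other than a literal "yes" aborts: "no" means no.
        return "aborted: no changes made"
    return apply_fn(plan)
```

Note the design choice: the gate only proceeds on an exact "yes", mirroring Terraform's interactive prompt, rather than trying to interpret ambiguous input. That is exactly the property the thread is asking for from the agent harness.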

> What else is an LLM supposed to do with this prompt?

Maybe I saw the build plan and realized I missed something and changed my mind. Or literally a million other trivial scenarios.

What an odd question.

  • > What an odd question.

    I don't see anything odd about this question.

    What kind of response did the user expect to get from the LLM after sending this request, and what was the point of sending it in the first place?

    • Genuine questions: what do you think the request was? To build the plan? To prepare the commit? Do you never have a second thought after looking at your output, or realize you forgot something you wanted to include? Could it be that they saw "one new function", thought "boy, there should really be two... what happened?" and changed their mind?

      To your original comment, it would be like calling your intern to ask them to order lunch, and them letting you know the sandwich place you asked them to order from was closed, and should they just put in an order for next Tuesday at an entirely different restaurant instead? And then that intern hearing, "no, that's not what I want" saying "well, I don't respect your 'no'" and doing it anyways.

      "Do X" -> "Here are the anticipated actions (which might deviate from your explicit intent), should I implement?" -> "no, that's not actually what I want"

      is a clear instruction set and a completely normal thought pattern.


Why does it ask a yes-no question if it isn’t prepared to take “no” as an answer?

(Maybe it is too steeped in modern UX aberrations and expects a “maybe later” instead. /s)

  • > Why does it ask a yes-no question if it isn’t prepared to take “no” as an answer?

    Because it doesn’t actually understand what a yes-no question is.