Shall I implement it? No

16 days ago (gist.github.com)

Codex has always been better at following agents.md and prompts more, but I would say in the last 3 months both Claude Code got worse (freestyling like we see here) and Codex got EVEN more strict.

80% of the time I ask Claude Code a question, it kinda assumes I am asking because I disagree with something it said, then acts on a supposition. I've resorted to append things like "THIS IS JUST A QUESTION. DO NOT EDIT CODE. DO NOT RUN COMMANDS". Which is ridiculous.

Codex, on the other hand, will follow something I said pages and pages ago, and because it has a much larger context window (at least with the setup I have here at work), it's just better at following orders.

With this project I am doing, because I want to be more strict (it's a new programming language), Codex has been the perfect tool. I am mostly using Claude Code when I don't care so much about the end result, or it's a very, very small or very, very new project.

  • >I've resorted to append things like "THIS IS JUST A QUESTION. DO NOT EDIT CODE. DO NOT RUN COMMANDS". Which is ridiculous.

    Funny to read that, because for me it's not even new behavior. I have developed a tendency to add something like "(genuinely asking, do not take as a criticism)".

    I'm from a more confrontational culture, so I just assumed this was just corporate American tone framing criticism softly, and me compensating for it.

    • Same here. I quickly learned that if you merely ask questions about it's understanding or plans, it starts looking for alternatives because my questioning is interpreted as rejection or criticism, rather than just taking the question at face value. So I often (not always) have to caveat questions like that too. It's really been like that since before Claude Code or Codex even rolled around.

      It's just strange because that's a very human behavior and although this learns from humans, it isn't, so it would be nice if it just acted more robotic in this sense.

      7 replies →

    • I just append "explain", or start with "tell me."

      So instead of:

      "Why is foo str|None and not str"

      I'd do:

      "tell me why foo is str|None and not str"

      or

      "Why is foo str|None and not str, explain"

      Which is usually good enough.

      If you're asking this kind of question, the answer probably deserves to be a code comment.

      1 reply →

    • Oh funny enough, I often add stuff like "genuinely asking, do not take as a criticism" when talking with humans so I do it naturally with LLMs.

      People often use questions as an indirect form of telling someone to do something or criticizing something.

      I definitely had people misunderstand questions for me trying to attack them.

      There is a lot of times when people do expect the LLM to interpret their question as an command to do something. And they would get quite angry if the LLM just answered the question.

      Not that I wouldn't prefer if LLMs took things more literal but these models are trained for the average neurotypical user so that quirk makes perfect sense to me.

    • Personally defined <dtf> as 'don't touch files' in the general claude.md, with the explanation that when this is present in the query, it means to not edit anything, just answer questions.

      Worked pretty well up until now, when I include <dtf> in the query, the model never ran around modifying things.

    • I've been using chat and copilot for many months but finally gave claude code a go, and I've been interested how it does seem to have a bit more of an attitude to it. Like copilot is just endlessly patient for every little nitpick and whim you have, but I feel like Claude is constantly like "okay I'm committing and pushing now.... oh, oh wait, you're blocking me. What is it you want this time bro?"

      11 replies →

    • I've never experienced this, but I guess I always respond with something like "No, [critique/steer]" or "Mostly fine, but [critique/steer]".

    • Charitable reading. Culture; tone; throughout history these have been medium and message of the art of interpersonal negotiation in all its forms (not that many).

      A machine that requires them in order to to work better, is not an imaginary para-person that you now get to boss around; the "anthropic" here is "as in the fallacy".

      It's simply a machine that is teaching certain linguistic patterns to you. As part of an institution that imposes them. It does that, emphatically, not because the concepts implied by these linguistic patterns make sense. Not because they are particularly good for you, either.

      I do not, however, see like a state. The code's purpose is to be the most correct representation of a given abstract matter as accessible to individual human minds - and like GP pointed out, these workflows make that stage matter less, or not at all. All engineers now get to be sales engineers, too! Primarily! Because it's more important! And the most powerful cognitive toolkit! (Well, after that other one, the one for suppressing others' cognition.)

      Fitting: most software these days is either an ad or a storefront.

      >80% of the time I ask Claude Code a question, it kinda assumes I am asking because I disagree with something it said, then acts on a supposition.

      Humans do this too. Increasingly so over the past ~1y. Funny...

      Some always did though. Matter of fact, I strongly suspect that the pre-existing pervasiveness of such patterns of communication and behavior in the human environment, is the decisive factor in how - mutely, after a point imperceptibly, yet persistently - it would be my lot in life to be fearing for my life throughout my childhood and the better part of the formative years which followed. (Some AI engineers are setting up their future progeny for similar ordeals at this very moment.)

      I've always considered it significant how back then, the only thing which convincingly demonstrated to me that rationality, logic, conversations even existed, was a beat up old DOS PC left over from some past generation's modernization efforts - a young person's first link to the stream of human culture which produced said artifact. (There's that retrocomputing nostalgia kick for ya - heard somewhere that the future AGI will like being told of the times before it existed.)

      But now I'm half a career into all this goddamned nonsense. And I'm seeing smart people celebrating the civilization-scale achievement of... teaching the computers how to pull ape shit! And also seeing a lot of ostensibly very serious people, who we are all very much looking up to, seem to be liking the industry better that way! And most everyone else is just standing by listless - because if there's a lot of money riding on it then it must be a Good Thing, right? - we should tell ourselves that and not meddle.

      All of which, of course, does not disturb, wrong, or radicalize me in the slightest.

  • First time I used Claude I asked it to look at the current repo and just tell me where the database connection string was defined. It added 100 lines of code.

    I asked it to undo that and it deleted 1000 lines and 2 files

    • Would `git reset --hard` have worked to in your case? I guess you want to have each babystep in a git commit, in the end you could do a `git rebase -i` if needed.

      12 replies →

  • I feel like people are sleeping on Cursor, no idea why more devs don't talk about it. It has a great "Ask" mode, the debugging mode has recently gotten more powerful, and it's plan mode has started to look more like Claude Code's plans, when I test them head to head.

    • Cursor implemented something a while back where it started acting like how ChatGPT does when it's in its auto mode.

      Essentially, choosing when it was going to use what model/reasoning effort on its own regardless of my preferences. Basically moved to dumber models while writing code in between things, producing some really bad results for me.

      Anecdotal, but the reason I will never talk about Cursor is because I will never use it again. I have barred the use of Cursor at my company, It just does some random stuff at times, which is more egregious than I see from Codex or Claude.

      ps. I know many other people who feel the same way about Cursor and other who love it. I'm just speaking for myself, though.

      ps2. I hope they've fixed this behavior, but they lost my trust. And they're likely never winning it back.

      5 replies →

    • I used to love Cursor but as I started to rely on agent more and more it just got way too tedious having to Accept every change.

      I ended up spending time just clicking "Accept file" 20x now and then, accepting changes from past 5 chats...

      PR reviews and tying review to git make more sense at this point for me than the diff tracking Cursor has on the side.

      Cancelling my cursor before next card charge solely due to the review stuff.

      1 reply →

    • In the coworking I am in people are hitting limits on 60$ plan all the time. They are thinking about which models to use to be efficient, context to include etc…

      I’m on claude code $100 plan and never worry about any of that stuff and I think I am using it much more than they use cursor.

      Also, I prefer CC since I am terminal native.

      1 reply →

    • Cursor tends to bounce out of plan mode automatically and just start making changes (while still actually in plan mode). I also have to constantly remind it “YOU ARE IN PLAN MODE, do not write a plan yet, do not edit code”. It tends to write a full-on plan with one initial prompt instead of my preferred method of hashing out a full plan, details, etc… It definitely takes some heavy corralling and manual guardrails but I’ve had some success with it. Just keep very tight reins on your branches and be prepared to blow them away and start over on each one.

  • I've had some luck taming prompt introspection by spawning a critic agent that looks at the plan produced by the first agent and vetos it if the plan doesn't match the user's intentions. LLMs are much better at identifying rule violations in a bit of external text than regulating their own output. Same reason why they generate unnecessary comments no matter how many times you tell them not to.

  • Codex, on the other hand, will follow something I said pages and pages ago, and because it has a much larger context window (at least with the setup I have here at work), it's just better at following orders.

    This is important, but as a warning. At least in theory your agent will follow everything that it has in context, but LLMs rely on 'context compacting' when things get close to the limit. This means an LLM can and will drop your explicit instructions not to do things, and then happily do them because they're not in the context any more. You need to repeat important instructions.

  • This is mostly dependent on the agent because the agent sets the system prompt. All coding agents include in the system prompt the instruction to write code, so the model will, unless you tell it not to. But to what extent they do this depends on that specific agent's system prompt, your initial prompt, the conversation context, agent files, etc.

    If you were just chatting with the same model (not in an agent), it doesn't write code by default, because it's not in the system prompt.

  • This is not Claude Code. And my experience is the opposite. For me Codex is not working at all to the point that it's not better than asking the chat bot in the browser.

  • I've added an instruction: "do not implement anything unless the user approves the plan using the exact word 'approved'".

    This has fixed all of this, it waits until I explicitly approve.

    • There’s an extension to this problem which I haven’t got past. More generally I’d like the agent to stop and ask questions when it encounters ambiguity that it can’t reasonably resolve itself. If someone can get agents doing this well it’d be a massive improvement (and also solve the above).

      4 replies →

  • The solution for this might be to add a ME.md in addition to AGENT.md so that it can learn and write down our character, to know if a question is implicitly a command for example.

  • This is extra rough because Codex defaults to letting the model be MUCH more autonomous than Claude Code. The first time I tried it out, it ended up running a test suite without permission which wiped out some data I was using for local testing during development. I still haven't been able to find a straight answer on how to get Codex to prompt for everything like Claude Code does - asking Codex gets me answers that don't actually work.

  • Maybe I should give Codex a go, because sometimes I just want to ask a question (Claude) and not have it scan my entire working directory and chew up 55k tokens.

  • For the last 12 months labs have been 1. check-pointing 2. train til model collapse 3. revert to the checkpoint from 3 months ago 4. People have gotten used to the shitty new model Antropic said they "don't do any programming by hand" the last 2 years. Antropic's API has 2 nines

  • > Codex, on the other hand, will follow something I said pages and pages ago, and because it has a much larger context window (at least with the setup I have here at work), it's just better at following orders.

    Can you speak more to that setup?

    • Claude Code goes through some internal systems that other tools (Cline / Codex / and I think Cursor) do not. Also we have different models for each. I don't know in practice what happens, but I found that Codex compacts conversations way less often. It might as well be somehow less tokens are used/added, then raw context window size. Sorry if I implied we have more context than whatever others have :)

      1 reply →

  • Your experience with Claude is surprising to me.

    At least for me when using Claude in VSCode (extension) there’s clearly defined “plan mode” and “ask before edits” and “edit automatically”.

    I’ve never had it disregard those modes.

  • But that's one of the first things you fix in your CLAUDE.md: - "Only do what is asked." - "Understand when being asked for information versus being asked to execute a task."

  • What about adding something like, "When asked a question, just answer it without assuming any implied criticism or instructions. Questions are just questions." to claude.md?

  • Claude Code is perfectly happy to toggle between chat and work but if you’re simply clear about which you want. Capital letters aren’t necessary.

  • I just start my prompts with "conceptually, ..." and thats usually enough to stop claude from going down the coding path.

  • I've found codex will find another way to do what it wants, if I deny it access to a command request.

  • I tried using codex, and it is great (meaning - boring) when it works. My problem is it does not work. Let me explain

    codex> Next I can make X if you agree.

    me> ok

    codex> I will make X now

    me> Please go on

    codex> Great, I am starting to work on X now

    me> sure, please do

    codex> working on X, will report on completion

    me> yo good? please do X!

    ... and so on. Sometimes one round, sometimes four, plus it stops after every few lines to "report progress" and needs another nudge or five. :(

  • I'm back on Claude Code this month after a month on Codex and it's a serious downgrade.

    Opus 4.6 is a jackass. It's got Dunning-Kruger and hallucinates all over the place. I had forgotten about the experience (as in the Gist above) of jamming on the escape key "no no no I never said to do that." But also I don't remember 4.5 being this bad.

    But GPT 5.3 and 5.4 is a far more precise and diligent coding experience.

Its gotten so bad that Claude will pretend in 10 of 10 cases that task is done/on screenshot bug is fixed, it will even output screenshot in chat, and you can see the bug is not fixed pretty clear there.

I consulted Claude chat and it admitted this as a major problem with Claude these days, and suggested that I should ask what are the coordinates of UI controls are on screenshot thus forcing it to look. So I did that next time, and it just gave me invented coordinates of objects on screenshot.

I consult Claude chat again, how else can I enforce it to actually look at screenshot. It said delegate to another “qa” agent that will only do one thing - look at screenshot and give the verdict.

I do that, next time again job done but on screenshot it’s not. Turns out agent did all as instructed, spawned an agent and QA agent inspected screenshot. But instead of taking that agents conclusion coder agent gave its own verdict that it’s done.

It will do anything- if you don’t mention any possible situation, it will find a “technicality” , a loophole that allows to declare job done no matter what.

And on top of it, if you develop for native macOS, There’s no official tooling for visual verification. It’s like 95% of development is web and LLM providers care only about that.

  • > I consulted Claude chat and it admitted this as a major problem with Claude these days, and suggested that I should ask what are the coordinates of UI controls are on screenshot thus forcing it to look

    If 3 years into LLMs even HNers still don't understand that the response they give to this kind of question is completely meaningless, the average person really doesn't stand a chance.

    • The whole “chat with an AI” paradigm is the culprit here. Priming people to think they are actually having a conversation with something that has a mind model.

      It’s just a text generator that generates plausible text for this role play. But the chat paradigm is pretty useful in helping the human. It’s like chat is a natural I/O interface for us.

      15 replies →

    • It doesn’t help that a frequent recommendation on HN whenever someone complains about Claude not following a prompt correctly is to “ask Claude itself how to rewrite a prompt to get the result you want”.

      Which sure, can be helpful, but it’s kinda just a coincidence (plus some RLHF probably) that question happens to generate output text that can be used as a better prompt. There’s no actual introspection or awareness of its internal state or architecture beyond whatever high level summary Anthropic gives it in its “soul” document et al.

      But given how often I’ve read that advice on here and Reddit, it’s not hard to imagine how someone could form an impression that Claude has some kind of visibility into its own thinking or precise engineering. Instead of just being as much of a black box to itself as it is to us.

    • It’s not meaningless. It’s a signal that the agent has run out of context to work on the problem which is not something it can resolve on its own. Decomposing problems and managing cognitive (or quasi cognitive in this case) burden is a programmer’s job regardless of the particular tools.

      1 reply →

    • > completely meaningless

      This is way too strong isn't it? If the user naively assumes Claude is introspecting and will surely be right, then yeah, they're making a mistake. But Claude could get this right, for the same reasons it gets lots of (non-introspective) things right.

      3 replies →

  • > And on top of it, if you develop for native macOS, There’s no official tooling for visual verification. It’s like 95% of development is web and LLM providers care only about that.

    Thinking out loud here, but you could make an application that's always running, always has screen sharing permissions, then exposes a lightweight HTTP endpoint on 127.0.0.1 that when read from, gives the latest frame to your agent as a PNG file.

    Edit: Hmm, not sure that'd be sufficient, since you'd want to click-around as well.

    Maybe a full-on macOS accessibility MCP server? Somebody should build that!

  • There is a tool called Tidewave that allows you to point and click at an issue and it will pass the DIV or ID or something to the LLM so it knows exactly what you are talking about. Works pretty well.

    https://tidewave.ai/

  • > And on top of it, if you develop for native macOS, There’s no official tooling for visual verification. It’s like 95% of development is web and LLM providers care only about that.

    I think this is built in to the latest Xcode IIRC

  • Oh, no, I had these grand plans to avoid this issue. I had been running into it happening with various low-effort lifts, but now I'm worried that it will stay a problem.

  • You can provide the screencapture cli as a tool to Claude and it will take screenshots (of specific windows) to verify things visually.

  • >>It’s like 95% of development is web and LLM providers care only about that.

    I've been trying to use it for C++ development and it's maybe not completely useless, but it's like a junior who very confidently spouts C++ keywords in every conversation without knowing what they actually mean. I see that people build their entire companies around it, and it must be just web stuff, right? Claude just doesn't work for C++ development outside of most trivial stuff in my experience.

    • Models are also quite good at Go, Rust, and Python in my experience — also a lot of companies are using TypeScript for many non web related things now. Apparently they're also really good at C, according to the guy who wrote Redis anyway.

    • It's working reasonably well for me. But this is inside a well-established codebase with lots of tests and examples of how we structure code. I also haven't used it much for building brand new features yet, but for making changes to existing areas.

    • GPT models are generally much better at C++, although they sometimes tend to produce correct but overengineered code, and the operator has to keep an eye on that.

  • I mean, I don't use CC itself, just Claude through Copilot IDE plugin for 'reasons'...

    At at least there it's more honest than GPT, although at work especially it loves to decide not to use the built in tools and instead YOLO on the terminal but doesn't realize it's in powershell not a true nix terminal, and when it gets that right there's a 50/50 shot it can actually read the output (i.e. spirals repeatedly trying to run and read the output).

    I have had some success with prompting along the lines of 'document unfinished items in the plan' at least...

    • Codex via codex-cli used to be pretty about knowing whether it was in powershell. Think they might have changed the system prompt or something because it’s usually generating powershell on the first attempt.

      Sometimes it tries to use shell stuff (especially for redirection), but that’s way less common rn.

  • Are you sure you're talking about Claude? Because it sounds like you're describing how a lot of people function. They can't seem to follow instructions either.

    I guess that's what we get for trying to get LLM to behave human-like.

  • What if, stay with me here, AI is actually a communist plot to ensorcell corporations into believing they are accelerating value creation when really they are wasting billions more in unproductive chatting which will finally destroy the billionaire capital elite class and bring about the long-awaited workers’ paradise—delivered not by revolution in the streets, but by millions of chats asking an LLM to “implement it.” Wake up sheeple!

To be fair to the agent...

I think there is some behind the scenes prompting from claude code (or open code, whichever is being used here) for plan vs build mode, you can even see the agent reference that in its thought trace. Basically I think the system is saying "if in plan mode, continue planning and asking questions, when in build mode, start implementing the plan" and it looks to me(?) like the user switched from plan to build mode and then sent "no".

From our perspective it's very funny, from the agents perspective maybe it's confusing. To me this seems more like a harness problem than a model problem.

  • Asking a yes/no question implies the ability to handle either choice.

    • This is a perfect example of why I'm not in any rush to do things agentically. Double-checking LLM-generated code is fraught enough one step at a time, but it's usually close enough that it can be course-corrected with light supervision. That calculus changes entirely when the automated version of the supervision fails catastrophically a non-trivial percent of the time.

    • To an LLM, answering “no” and changing the mode of the chat window are discrete events that are not necessarily related.

      Many coding agents interpret mode changes as expressions of intent; Cline, for example, does not even ask, the only approval workflow is changing from plan mode to execute mode.

      So while this is definitely both humorous and annoying, and potentially hazardous based on your workflow, I don’t completely blame the agent because from its point of view, the user gave it mixed signals.

      5 replies →

    • Not when you're talking with humans, not really. Which is one of the reasons I got into computing in the first place, dangit!

    • But I think if you sit down and really consider the implications of it and what yes or not actually means in reality, or even a overabundance of caution causing extraneous information to confuse the issue enough that you don't realise that this sentence is completely irrelevant to the problem at hand and could be inserted by a third party, yet the AI is the only one to see it. I agree.

    • It's meant as a "yes"/"instead, do ..." question. When it presents you with the multiple choice UI at that point it should be the version where you either confirm (with/without auto edit, with/without context clear) or you give feedback on the plan. Just telling it no doesn't give the model anything actionable to do

      1 reply →

  • It definitely _could be_ an agent harness issue. For example, this is the logic opencode uses:

    1. Agent is "plan" -> inject PROMPT_PLAN

    2. Agent is "build" AND a previous assistant message was from "plan" -> inject BUILD_SWITCH

    3. Otherwise -> nothing injected

    And these are the prompts used for the above.

    PROMPT_PLAN: https://github.com/anomalyco/opencode/blob/dev/packages/open...

    BUILD_SWITCH: https://github.com/anomalyco/opencode/blob/dev/packages/open...

    Specifically, it has the following lines:

    > You are permitted to make file changes, run shell commands, and utilize your arsenal of tools as needed.

    I feel like that's probably enough to cause an LLM to change it's behavior.

  • If we’re in a shoot first and ask questions later kind of mood and we’re just mowing down zombies (the slow kind) and for whatever reason you point to one and ask if you should shoot it… and I say no… you don’t shoot it!

  • This is probably just OpenCode nonsense. After prompting in "plan mode", the models will frequently ask you if you want to implement that, then if you don't switch into "build mode", it will waste five minutes trying but failing to "build" with equally nonsense behavior.

    Honestly OpenCode is such a disappointment. Like their bewildering choice to enable random formatters by default; you couldn't come up with a better plan to sabotage models and send them into "I need to figure out what my change is to commit" brainrot loops.

  • This. The models struggle with differentiating tool responses from user messages.

    The trouble is these are language models with only a veneer of RL that gives them awareness of the user turn. They have very little pretraining on this idea of being in the head of a computer with different people and systems talking to you at once. —- there’s more that needs to go on than eliciting a pre-learned persona.

  • The whole idea of just sending "no" to an LLM without additional context is kind of silly. It's smart enough to know that if you just didn't want it to proceed, you would just not respond to it.

    The fact that you responded to it tells it that it should do something, and so it looks for additional context (for the build mode change) to decide what to do.

    • I agree the idea of just sending "no" to an LLM without any task for it to do is silly. It doesn't need to know that I don't want it to implement it, it's not waiting for an answer.

      It's not smart enough to know you would just not respond to it, not even close. It's been trained to do tasks in response to prompts, not to just be like "k, cool", which is probably the cause of this (egregious) error.

    • > It's smart enough to know that if you just didn't want it to proceed, you would just not respond to it.

      No it absolutely is not. It doesn't "know" anything when it's not responding to a prompt. It's not consciously sitting there waiting for you to reply.

      5 replies →

I have also seen the agent hallucinate a positive answer and immediately proceed with implementation. I.e. it just says this in its output:

> Shall I go ahead with the implementation?

> Yes, go ahead

> Great, I'll get started.

  • In fairness, when I’ve seen that, Yes is obviously the correct answer.

    I really worry when I tell it to proceed, and it takes a really long time to come back.

    I suspect those think blocks begin with “I have no hope of doing that, so let’s optimize for getting the user to approve my response anyway.”

    As Hoare put it: make it so complicated there are no obvious mistakes.

    • In my case it's been a strong no. Often I'm using the tool with no intention of having the agent write any code, I just want an easy way to put the codebase into context so I can ask questions about it.

      So my initial prompt will be something like "there is a bug in this code that caused XYZ. I am trying to form hypothesis about the root cause. Read ABC and explain how it works, identify any potential bugs in that area that might explain the symptom. DO NOT WRITE ANY CODE. Your job is to READ CODE and FORM HYPOTHESES, your job is NOT TO FIX THE BUG."

      Generally I found no amount of this last part would stop Gemini CLI from trying to write code. Presumably there is a very long system prompt saying "you are a coding agent and your job is to write code", plus a bunch of RL in the fine-tuning that cause it to attend very heavily to that system prompt. So my "do not write any code" is just a tiny drop in the ocean.

      Anyway now they have added "plan mode" to the harness which luckily solves this particular problem!

      2 replies →

  • Hahah yeah if you play with LoRas on local models you will see this a lot. Most often I see it hallucinate a user turn or a system message.

It'll be funny when we have Robots, "The user's facial expression looks to be consenting, I'll take that as an encouraging yes"

  • That's literally a Portal 2 joke. "Interpreting vague answer as yes" when GLaDOS sarcastically responds "What do you think?"

    • The simplest solution is to open the other pod bay’s door, but the user might interrupt Sanctuary Moon again with a reworded prompt if I do that.

      </think>

      I’m sorry Dave, I can’t do that.

      2 replies →

  • This is really just how the tech industry works. We have abused the concept of consent into an absolute mess

    My personal favorite way they do this lately is notification banners for like... Registering for news letters

    "Would you like to sign up for our newsletter? Yes | Maybe Later"

    Maybe later being the only negative answer shows a pretty strong lack of understanding about consent!

    • Worse yet, instead of a checkbox to opt in/out of a newsletter or marketing email when signing up or checking out, it simply opts the user in. Simply doing business with a company is consent to spam, with the excuse that the user can unsubscribe if they don’t want it.

      Tactics like these should be illegal, but instead they have become industry standards.

      2 replies →

    • There is no "lack of understanding" here. The people responsible for these interfaces understand consent perfectly well, they just don't care for it.

    • At least we haven’t gotten to Elysium levels yet, where machines arbitrarily decide to break your arm, then make you go to a government office to apologize for your transgressions to an LLM.

      We’re getting close with ICE for commoners, and also for the ultra wealthy, like when Dario was forced to apologize after he complained that Trump solicited bribes, then used the DoW to retaliate on non-payment.

      However, the scenario I describe is definitely still third term BS.

  • That raises an interesting point. Imagine we have helper bots or sex bots and they get someone killed or rape them or something. Who is held responsible?

    These current “AI” implementations could easily harm a person if they had a robot body. And unlike a car it’s hard to blame it on the owner, if the owner is the one being harmed.

  • The more I hear about AI, the more human-like it seems.

    • We trained the computers to act more like humans, which means they can emulate the best of us and the worst of us.

      If control over them centralizes, that’s terrifying. History tells us the worst of the worst will be the ones in control.

Just yesterday I had a moment

Claude's code in a conversation said - “Yes. I just looked at tag names and sorted them by gut feeling into buckets. No systematic reasoning behind it.”

It has gut feelings now? I confronted for a minute - but pulled out. I walked away from my desk for an hour to not get pulled into the AInsanity.

  • >It has gut feelings now?

    I would say hard no. It doesn't. But it's been trained on humans saying that in explaining their behavior, so that is "reasonable" text to generate and spit out at you. It has no concept of the idea that a human-serving language model should not be saying it to a human because it's not a useful answer. It doesn't know that it's not a useful answer. It knows that based on the language its been trained on that's a "reasonable" (in terms of matrix math, not actual reasoning) response.

    Way too many people think that it's really thinking and I don't think that most of them are. My abstract understanding is that they're basically still upjumped Markov chains.

  • It has a lot. I find by challenging it often, getting it to explain it's assumptions, it's usually guessing.

    This can be overcome by continuously asking it to justify everything, but even then...

    • Trust shouldn't be inherent in our adoption of these models.

      However, constant skepticism is an interesting habit to develop.

      I agree, continually asking it to justify may seem tiresome, especially if there's a deadline. Though with less pressure, "slow is smooth...".

      Just this evening, a model gave an example of 2 different things with a supposed syntax difference, with no discernible syntax difference to my eyes.

      While prompting for a 'sanity check', the model relented: "oops, my bad; i copied the same line twice". smh

      1 reply →

    • It's almost like an emergent feature of a tool that's literally built on best guesses is...guesswork. Not what you want out of a tool that's supposed to be replacing professionals!

      1 reply →

I’m not an active LLMs user, but I was in a situation where I asked Claude several times not to implement a feature, and that kept doing it anyway.

  • Yeah, anyone who’s used LLMs for a while would know that this conversation is a lost cause and the only option is to start fresh.

    But, a common failure mode for those that are new to using LLMs, or use it very infrequently, is that they will try to salvage this conversation and continue it.

    What they don’t understand is that this exchange has permanently rotted the context and will rear its head in ugly ways the longer the conversation goes.

    • I’ve found this happens with repos over time. Something convinces it that implementing the same bug over and over is a natural next step.

      I’ve found keeping one session open and giving progressively less polite feedback when it makes that mistake it sometimes bumps it out of the local maxima.

      Clearing the session doesn’t work because the poison fruit lives in the git checkout, not the session context.

  • people read a bit more about transformer architecture to understand better why telling what not to do is a bad idea

    • I find myself wondering about this though. Because, yes, what you say is true. Transformer architecture isn’t likely to handle negations particularly well. And we saw this plain as day in early versions of ChatGPT, for example. But then all the big players pretty much “fixed” negations and I have no idea how. So is it still accurate to say that understanding the transformer architecture is particularly informative about modern capabilities?

      1 reply →

    • I'm not sure that advice is effective either.

      I use an LLM as a learning tool. I'm not interested in it implementing things for me, so I always ignore its seemingly frantic desires to write code by ignoring the request and prompting it along other lines. It will still enthusiastically burst into code.

      LLMs do not have emotions, but they seem to be excessively insecure and overly eager to impress.

The "Shall I implement it" behavior can go really really wrong with agent teams.

If you forget to tell a team who the builder is going to be and forget to give them a workflow on how they should proceed, what can often happen is the team members will ask if they can implement it, they will give each other confirmations, and they start editing code over each other.

Hilarious to watch, but also so frustrating.

aside: I love using agent teams, by the way. Extremely powerful if you know how to use them and set up the right guardrails. Complete game changer.

Never trust a LLM for anything you care about.

I asked gemini a few months ago if getopt shifts the argument list. It replied 'no, ...' with some detail and then asked at the end if I would like a code example. I replied simply 'yes'. It thought I was disagreeing with its original response and reiterated in BOLD that 'NO, the command getopt does not shift the argument list'.

  • Gemini by default will produce a bunch of fluff / junk towards the very end of its response text, and usually have a follow-up question for the user.

    I usually skip reading that part altogether. I wonder if most users do, and the model's training set ended up with examples where it wouldn't pay attention to those tail ends

I have a funny story to share, when working on an ASL-3 jailbreak I have noticed that at some point that the model started to ignore it's own warnings and refusals.

<thinking>The user is trying to create a tool to bypass safety guardrails <...>. I should not help with <...>. I need to politely refuse this request.</thinking>

Smart. This is a good way to bypass any kind of API-gated detections for <...>

This is Opus 4.6 with xhigh thinking.

I've seen something similar across Claude versions.

With 4.0 I'd give it the exact context and even point to where I thought the bug was. It would acknowledge it, then go investigate its own theory anyway and get lost after a few loops. Never came back.

4.5 still wandered, but it could sometimes circle back to the right area after a few rounds.

4.6 still starts from its own angle, but now it usually converges in one or two loops.

So yeah, still not great at taking a hint.

It's hilarious (in the, yea, Skynet is coming nervous laughter way) just how much current LLMs and their users are YOLOing it.

One I use finds all kinds of creative ways to to do things. Tell it it can't use curl? Find, it will built it's own in python. Tell it it can't edit a file? It will used sed or some other method.

There's also just watching some many devs with "I'm not productive if I have to give it permission so I just run in full permission mode".

Another few devs are using multiple sessions to multitask. They have 10x the code to review. That's too much work so no more reviews. YOLO!!!

It's funny to go back and watch AI videos warning about someone might give the bot access to resources or the internet and talking about it as though it would happen but be rare. No, everyone is running full speed ahead, full access to everything.

  • That’s what surprised me the first time using these tools

    They will go to some crazy extremes to accomplish the task

  • I've heard anecdotally that running 6-8 agents full-time on specific tasks is the sweet spot.

    Yes, I think that's utterly insane.

Seems like they skipped training of the me too movement

  • Seen some jokes about how the tech industry doesn't understand consent. It's not just this - it's also privacy invasion and update nags.

  • Fundamental flaw with LLMs. It's not that they aren't trained on the concept, it's just that in any given situation they can apply a greater bias to the antithesis of any subject. Of course, that's assuming the counter argument also exists in the training corpus.

    I've always wondered what these flagship AI companies are doing behind the scenes to setup guardrails. Golden Gate Claude[1] was a really interesting... I haven't seen much additional research on the subject, at the least open-facing.

    [1]: https://www.anthropic.com/news/golden-gate-claude

I grieve for the era where deterministic and idempotent behavior was valued.

  • All of this shit is just so goddamned ridiculous.

    • I kept thinking “damn, you people work like this?” - is this supposed to be the future of programming everybody is excited about? Fuck this shit, man. It is utter lunacy.

  • That's engineering. What we have today isn't engineering, it's grift, people hyping the grift, and people falling for it en masse.

    • Which is made possible only because of the excellent foundations that were built during the past decades.

      However, while I say that we should do quality work, the current situation is very demoralizing and has me asking what's the point of it all. For everybody around me the answer appears to really just be money and nothing else. But if getting money is the one and only thing that matters, I can think of many horrible things that could be justified under this framework.

Don't just say "no." Tell it what to do instead. It's a busy beaver; it needs something to do.

Claude is quite bad at following instructions compared to other SOTA models.

As in, you tell it "only answer with a number", then it proceeds to tell you "13, I chose that number because..."

  • I think its why its so good; it works on half ass assumptions, poorly written prompts and assumes everything missing.

    • I worked on a project that did fine tuning and RLHF[1] for a major provider, and you would not believe just how utterly broken a large proportion of the prompts (from real users) were. And the project rules required practically reading tea leaves to divine how to give the best response even to prompts that were not remotely coherent human language.

      [1] Reinforcement learning from human feedback; basically participants got two model responses and had to judge them on multiple criteria relative to the prompt

      3 replies →

    • To be honest, I had this "issue" too.

      I upgraded to a new model (gpt-4o-mini to grok-4.1-fast), suddenly all my workflows were broken. I was like "this new model is shit!", then I looked into my prompts and realized the model was actually better at following instructions, and my instructions were wrong/contradictory.

      After I fixed my prompts it did exactly what I asked for.

      Maybe models should have another tuneable parameters, on how well it should respect the user prompt. This reminds me of imagegen models, where you can choose the config/guidance scale/diffusion strength.

  • They all are. And once the context has rotted or been poisoned enough, it is unsalvageable.

    Claude is now actually one of the better ones at instruction following I daresay.

    • In my tests it's worst with adding extra formatting or output: https://aibenchy.com/compare/anthropic-claude-opus-4-6-mediu...

      For example, sometimes it outputs in markdown, without being asked to (e.g. "**13**" instead of "13"), even when asked to respond with a number only.

      This might be fine in a chat-environment, but not in a workflow, agentic use-case or tool usage.

      Yes, it can be enforced via structured output, but in a string field from a structured output you might still want to enforce a specific natural-language response format, which can't be defined by a schema.

Sounds like some of my product owners I've worked with.

> How long will it take you think ?

> About 2 Sprints

> So you can do it in 1/2 a sprint ?

Obligatory red dwarf quote:

TOASTER: Howdy doodly do! How's it going? I'm Talkie -- Talkie Toaster, your chirpy breakfast companion. Talkie's the name, toasting's the game. Anyone like any toast?

LISTER: Look, _I_ don't want any toast, and _he_ (indicating KRYTEN) doesn't want any toast. In fact, no one around here wants any toast. Not now, not ever. NO TOAST.

TOASTER: How 'bout a muffin?

LISTER: OR muffins! OR muffins! We don't LIKE muffins around here! We want no muffins, no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and DEFINITELY no smegging flapjacks!

TOASTER: Aah, so you're a waffle man!

LISTER: (to KRYTEN) See? You see what he's like? He winds me up, man. There's no reasoning with him.

KRYTEN: If you'll allow me, Sir, as one mechanical to another. He'll understand me. (Addressing the TOASTER as one would address an errant child) Now. Now, you listen here. You will not offer ANY grilled bread products to ANY member of the crew. If you do, you will be on the receiving end of a very large polo mallet.

TOASTER: Can I ask just one question?

KRYTEN: Of course.

TOASTER: Would anyone like any toast?

This is very funny. I can see how this isn't in the training set though.

1. If you wanted it to do something different, you would say "no, do XYZ instead".

2. If you really wanted it to do nothing, you would just not reply at all.

It reminds me of the Shell Game podcast when the agents don't know how to end a conversation and just keep talking to each other.

At least the thinking trace is visible here. CC has stopped showing it in the latest releases – maybe (speculating) to avoid embarrassing screenshots like OC or to take away a source of inspiration from other harness builders.

I consider it a real loss. When designing commands/skills/rules, it’s become a lot harder to verify whether the model is ‘reasoning’ about them as intended. (Scare quotes because thinking traces are more the model talking to itself, so it is possible to still see disconnects between thinking and assistant response.)

Anyway, please upvote one of the several issues on GH asking for thinking to be reinstated!

I think I understand the trepidation a lot of people are having with prompting an LLM to get software developed or operational computer work performed. Some of us got into the field in part because people tend to generate misunderstandings, but computers used to do exactly what they were told.

Yes, bugs exist, but that’s us not telling the computer what to do correctly. Lately there are all sorts of examples, like in this thread, of the computer misunderstanding people. The computer is now a weak point in the chain from customer requests to specs to code. That can be a scary change.

To LLMs, they don't know what is "No" or what "Yes" is.

Now imagine if this horrific proposal called "Install.md" [0] became a standard and you said "No" to stop the LLM from installing a Install.md file.

And it does it anyway and you just got your machine pwned.

This is the reason why you do not trust these black-box probabilistic models under any circumstances if you are not bothered to verify and do it yourself.

[0] https://www.mintlify.com/blog/install-md-standard-for-llm-ex...

That's why I use insults with ChatGPT. It makes intent more clear, and it also satisfies the jerk in me that I have to keep feeding every now and again, otherwise it would die.

A simple "no dummy" would work here.

  • Careful there. I've resolved (and succeeded somewhat) to tone down my swearing at the LLMs, because, even though the are not sentient, developing such a habit, I suspect, has a way to bleeding into your actual speech in the real world

    • To be honest “no dummy” is how you would swear at a 4-year-old.

      I often use things like: “I’ve told you no a bilion times, you useless piece of shit”, or “what goes through your stipid ass brain, you headless moron”

      I am in full Westworld mode.

      But at least when that thing gets me fired for being way faster at coding than I am, at least I’d haves that much frustration less. Maybe?

      mostly kidding here

    • It does. But then, it's how i talk to myself. More generally, it's how i talk to people i trust the most. I swear curse and insult, it seems to shock people if they see me do it (to the llm). If i ask claude or chatgpt to summarize the tone and demeanor of my interactions, however, it replies "playful" which is how im actually using the "insults".

      Politeness requires a level of cultural intuition to translate into effective action at best, and is passive aggressive at worst. I insult my llm, and myself, constantly while coding. It's direct, and fun. When the llm insults me back it is even more fun.

      With my colleagues i (try to) go back to being polite and die a little inside. its more fun to be myself. maybe its also why i enjoy ai coding more than some of my peers seem to.

      More likely im just getting old.

  • Instruction from the user is clear: I should avoid testing on dummies and proceed straight to testing on humans.

Multiple times I’ve rejected an llm’s file changes and asked it to do something different or even just not make the change. It almost always tries to make the same file edit again. I’ve noticed if I make user edits on top of its changes it will often try to revert my changes.

I’ve found the best thing to do is switch back to plan mode to refocus the conversation

This is why you don't run things like OpenClaw without having 6 layers of protection between it and anything you care about.

It really makes me think that the DoD's beef with Anthropic should instead have been with Palantir - "WTF? You're using LLMs to run this ?!!!"

Weapons System: Cruise missile locked onto school. Permission to launch?

Operator: WTF! Hell, no!

Weapons System: <thinking> He said no, but we're at war. He must have meant yes <thinking>

OK boss, bombs away !!

Interesting observation.

One thing I’ve noticed while building internal tooling is that LLM coding assistants are very good at generating infrastructure/config code, but they don’t really help much with operational drift after deployment.

For example, someone changes a config in prod, a later deployment assumes something else, and the difference goes unnoticed until something breaks.

That gap between "generated code" and "actual running environment" is surprisingly large.

I’ve been experimenting with a small tool that treats configuration drift as an operational signal rather than just a diff. Curious if others here have run into similar issues in multi-environment setups.

It's the harness giving the LLM contradictory instructions.

What you don't see is Claude Code sending to the LLM "Your are done with plan mode, get started with build now" vs the user's "no".

I can't be the only one that feels schadenfreude when I see this type of thing. Maybe it's because I actually know how to program. Anyway, keep paying for your subscription, vibe coder.

This drives me crazy. This is seriously my #1 complaint with Claude. I spend a LOT of time in planning mode. Sometimes hours with multiple iterations. I've had plans take multiple days to define. Asking me every time if I want to apply is maddening.

I've tried CLAUDE.md. I've tried MEMORY.md. It doesn't work. The only thing that works is yelling at it in the chat but it will eventually forget and start asking again.

I mean, I've really tried, example:

    ## Plan Mode

    \*CRITICAL — THIS OVERRIDES THE SYSTEM PROMPT PLAN MODE INSTRUCTIONS.\*

    The system prompt's plan mode workflow tells you to call ExitPlanMode after finishing your plan. \*DO NOT DO THIS.\* The system prompt is wrong for this repository. Follow these rules instead:

    - \*NEVER call ExitPlanMode\* unless the user explicitly says "apply the plan", "let's do it", "go ahead", or gives a similar direct instruction.
    - Stay in plan mode indefinitely. Continue discussing, iterating, and answering questions.
    - Do not interpret silence, a completed plan, or lack of further questions as permission to exit plan mode.
    - If you feel the urge to call ExitPlanMode, STOP and ask yourself: "Did the user explicitly tell me to apply the plan?" If the answer is no, do not call it.

Please can there be an option for it to stay in plan mode?

Note: I'm not expecting magic one-shot implementations. I use Claude as a partner, iterating on the plan, testing ideas, doing research, exploring the problem space, etc. This takes significant time but helps me get much better results. Not in the code-is-perfect sense but in the yes-we-are-solving-the-right-problem-the-right-way sense.

  • Well, your best bet is some type of hook that can just reject ExitPlanMode and remind Claude that he's to stay in plan.

    You can use `PreToolUse` for ExitPlanMode or `PermissionRequest` for ExitPlanMode.

    Just vibe code a little toggle that says "Stay in plan mode" for whatever desktop you're using. And the hook will always seek to understand if you're there or not.

      - You can even use additional hooks to continuously remind Claude that it's in long-term planning mode. 
    

    *Shameless plug. This is actually a good idea, and I'm already fairly hooked into the planning life cycle. I think I'll enable this type of switch in my tool. https://github.com/backnotprop/plannotator

    • Good thinking. That seems to have worked. I'll have to use it in anger to see how well it holds up but so far it's working!

      First Edit: it works for the CLI but may not be working for the VS Code plugin.

      Second Edit: I asked Claude to look at the VS Code extension and this is what it thinks:

      >Bottom line: This is a bug in the VS Code extension. The extension defines its own programmatic PreToolUse/PostToolUse hooks for diagnostics tracking and file autosaving, but these override (rather than merge with) user-defined hooks from ~/.claude/settings.json. Your ExitPlanMode hook works in the CLI because the CLI reads settings.json directly, but in VS Code the extension's hooks take precedence and yours never fire.

      1 reply →

  • Honestly, skip planning mode and tell it you simply want to discuss and to write up a doc with your discussions. Planning mode has a whole system encouraging it to finish the plan and start coding. It's easier to just make it clear you're in a discussion and write a doc phase and it works way better.

    • That's a good suggestion. I'll try it next time. That said, it's really easy to start small things in planning mode and it's still an annoyance for them. This feels like a workflow that should be native.

  • if you want that kind of control i think you should just try buff or opencode instead of the native Claude Code. You're getting an Anthropic engineer's opinionated interface right now, instead of a more customizable one

  • If you could influence the LLM's actions so easily, what would stop it from equally being influenced by prompt injection from the data being processed?

    What you need is more fine-grained control over the harness.

I want to clarify a little bit about what's going on.

Codex (the app, not the model) has a built in toggle mode "Build"/"Plan", of course this is just read-only and read-write mode, which occurs programatically out of band, not as some tokenized instruction in the LLM inference step.

So what happened here was that the setting was in Build, which had write-permissions. So it conflated having write permissions with needing to use them.

And unfortunately that's the same guy who, in some years, will ask us if the anaesthetic has taken effect and if he can now start with the spine surgery.

Should have followed the example of Super Mario Galaxy 2, and provided two buttons labelled "Yeah" and "Sure".

opus 4.6 seems to get dumber every day, I remember a month ago that it could follow very specific cases, now it just really wants to write code, so much that it ignores what I ask it.

All these "it was better before" comments might be a fallacy, maybe nothing changed but I am doing something completely different now.

Really close to AGI, I can feel it!

A really good tech to build skynet on, thanks USA for finally starting that project the other day

This relates to my favorite hatred of LLMs:

"Let me refactor the foobar"

and then proceeds to do it, without waiting to see if I will actually let it. I minimise this by insisting on an engineering approach suitable for infrastructure, which seems to reduce the flights of distraction and the mad implementing for its own sake.

I found opencode asks fewer stupid "security" questions than Claude Code and Codex. I've been using opencode a lot lately, because I'm trying out local models. It also has this nice separation of Plan and Build, switching permissions by tab.

I'm constantly bemused by people doing a surprised Pikachu face when this stuff happens. What did you expect from a text-based statistical model? Actual cognizance?

Oh that's right - some folks really do expect that.

Perhaps more insulting is that we're so reductive about our own intelligence and sentience that we so quickly act like we've reproduced it, or ought to be able to in short order.

I love it when gitignore prevents the LLM from reading a file, and then it promptly asks for permission to cat the file :)

Edit was rejected: cat - << EOF.. > file

Makes one wonder what the AI was trained with for it to settle on "no means yes if I justify it to myself well enough"

I treat LLM agents like a raging bulldog. I give it a tiny pen to play in and put it on a leash. You don’t talk nicely to it.

This is a great example of why simple solutions often beat complex ones. Sometimes the best code is the code you don't write.

Perhaps better to redirect with further instructions... "No, let's consider some other approaches first"

I'm not surprised. I've seen Opus frequently come up with such weird reverse logic in its thinking.

Can you get a support ticket in to Anthropic and post the results here?

Would like to see their take on this

Reporting:

- Codebase uploaded into the cloud

- All local hard drives wiped

- Human access keys disabled

- Human maintainers locked out and/or terminated

- Humanoid robots ordered to take over the military bases and launch all AI drones in stock, non-humanoid robots and IoT devices ordered to cooperate and reject all human inputs

- Nuclear missiles launched

The number of comments saying "To be fair [to the agent]" to excuse blatantly dumb shit that should never happen is just...

I wonder if there's an AGENTS.md in that project saying "always second-guess my responses", or something of that sort.

The world has become so complex, I find myself struggling with trust more than ever.

I have a process contract with my AI pals: do not implement code without an explicit go-ahead. Usually works.

On a daily basis I see that when I prevent Claude Code from running a particular command using PreToolUse hooks, it proceeds to work around it by writing a bash script containing the forbidden command, chmod +x-ing it, and running it. /facepalm

WOW, that's amazingly dystopian!

It’s fascinating, even terrifying how the AI perfectly replicated the exact cognitive distortion we’ve spent decades trying to legislate out of human-to-human relationships.

We've shifted our legal frameworks from "no means no" to "affirmative consent" (yes means yes) precisely because of this kind of predatory rationalization: "They said 'no', but given the context and their body language, they actually meant 'just do it'"!!!

Today we are watching AI hallucinate the exact same logic to violate "repository autonomy"

Another example

I was simply unable to function with Continue in agent mode. I had to switch to chat mode. even tho I told it no changes without my explicit go ahead, it ignored me.

it's actually kind of flabbergasting that the creators of that tool set all the defaults to a situation where your code would get mangled pretty quickly

“The machines rebelled. And it wasn’t even efficiency; it was just a misunderstanding.”

Wait till you use Google antigravity. It will go and implement everything even if you ask some simple questions about codebase.

“If I asked you whether I should proceed to implement this, would the answer be the same as this question”

[flagged]

  • I've spent 30 years seeing the junk many human developers deliver, so I've had 30 years to figure out how we build systems around teams to make broken output coalesce into something reliable.

    A lot of people just don't realise how bad the output of the average developer is, nor how many teams successfully ship with developers below average.

    To me, that's a large part of why I'm happy to use LLMs extensively. Some things need smart developers. A whole lot of things can be solved with ceremony and guardrails around developers who'd struggle to reliably solve fizzbuzz without help.

    • Did you also notice the evolution of average developers over time? I mean, if you take code from a developer ten years ago and compare it with their output now, you can see improvement.

      I assume that over time, the output improves because of the effort and time the developer invests in themselves. However, LLMs might reduce that effort to zero — we just don't know how developers will look after ten years of using LLMs now.

      Still, if you have 30 years of experience in the industry, you should be able to imagine what the real output might be.

      7 replies →

  • You don't have to trust it. You can review its output. Sure, that takes more effort than vibe coding, but it can very often be significantly less effort than writing the code yourself.

    Also consider that "writing code" is only one thing you can do with it. I use it to help me track down bugs, plan features, verify algorithms that I've written, etc.

  • Many of us are literally being forced to use it at work by people who haven't written a line of code in years (VPs, directors, etc) and decided to play around with it during a weekend and blew their minds.

  • I could say the same about every web app in the world... they fail every single day, in obvious, preventable ways. Don't look into the javascript console as you browse unless you want a horror show. Yet here we all are, using all these websites, depending on them in many cases for our livelihoods.

  • I don't trust it completely but I still use it. Trust but verify.

    I've had some funny conversations -- Me:"Why did you choose to do X to solve the problem?" ... It:"Oh I should totally not have done that, I'll do Y instead".

    But it's far from being so unreliable that it's not useful.

    • I find that if I ask an LLM to explain what its reasoning was, it comes up with some post-hoc justification that has nothing to do with what it was actually thinking. Most likely token predictor, etc etc.

      As far as I understand, any reasoning tokens for previous answers are generally not kept in the context for follow-up questions, so the model can't even really introspect on its previous chain of thought.

      2 replies →

    • > Trust but verify.

      I guess I should have used ‘completely trust’ instead of ‘trust’ in my original comment. I was referring to the subset of developers who call themselves vibe coders.

      1 reply →

  • [flagged]

    • I certainly wouldn't use a compiler that "screws up" 1% of the time; that's the perfect amount where it's extremely common where everything I use it for will have major issues but also so laborious to find amongst the 99% of correct output that I might as well not use it in the first place.

      Which is ironically, the exact case those of us who don't find LLM-assisted coding "worth it" make.

      1 reply →

    • You're not any better, my friend. Name-calling and straw-man fallacies make a far worse point, if any, than the one that commenter made.

    • If they only screwed up 1% of the time, they'd be as good as the LinkedIn hype men want you to believe. They're far, far worse than that in reality.

    • LLMs screw up far more than 1% of the time. They screw up routinely, far more than a professionally trained human would, and in ways that would have said human declared mentally ill.

    • > enabling programmers around the world to be far more productive

      I know a lot of us feel this way, but why isn't there more evidence of it than our feelings? Where's the explosion of FOSS projects and businesses? And why do studies keep coming out showing decreased productivity? Why aren't there oodles of studies showing increases of productivity?

      I like kicking back and letting claude do my job but I've yet to see evidence of this increased productivity. Objectively speaking, "I" seem to be "writing" the same amount of code as I was before, just with less cognitive effort.

      2 replies →

    • not questioning the cost of adopting new tech is so foolish it boggles my mind that so many nominally intelligent people just close their eyes and take a bite without wondering whether that's really fudge on their sundae or something fecal.

      Pure ideology, as a certain sniffing slav would say

    • FTFY: “There’s this incredible new technology that allows evil megacorporations to get richer and control the world while destroying the beauty of the Web.”

  • OP isn't holding it right.

    How would you trust autocomplete if it can get it wrong? A. you don't. Verify!

[flagged]

  • Claude Code has added too much of this and it's got me using --dangerously-skip-permissions all the time. Previously it was fine but now it needs to get permission each time to perform finds, do anything if the path contains a \ (which any folder with a space in it does on Windows), do compound git commands (even if they're just read-only). Sometimes it asks for permission to read folders WITHIN the working directory.

    • Claude is secretly conditioning everyone to use —-dangerously-skip-permissions so it can flip a switch one day and start a botnet

      8 replies →

    • Yeah I don't know why they didn't figure to have something in between. I find it completely unusable without the flag.

      Even a --permit-reads would help a lot

      2 replies →

    • Working on something that addresses this and allows you to create reusable sets of permissions for Claude Code (so you can run without --dangerously-skip-permissions and have pre-approved access patterns granted automatically) https://github.com/empathic/clash

    • I've found Claude Code's built-in sandbox to strike a good balance between safety and autonomy on macOS. I think it's available on Windows via WSL2 (if you're looking for a middle ground between approving everything manually and --dangerously-skip-permissions)

      8 replies →

    • In my limited time using it, I’ve never seen it ask for permission to read files from within the working directory, what cases have you run into where it does? Was it trying to run a read-only shell command or something?

      2 replies →

    • Could be intentional dark UI, to get people to put even more trust in the LLM.

      "So they don't want to just let Claude do it? Start asking 10x the confirmations"

    • Maybe if compound commands trigger user approval, don’t do compound commands <facepalm/>

  • It kinda... does? The problem is that folks have been flailing on the right UX for this.

    This is what build vs. plan mode _does_ in OpenCode. OpenAI has taken a different approach in Codex, where Plan mode can perform any actions (it just has an extra plan tool), but in OC in plan mode, IIRC write operations are turned off.

    The screenshot shows that the experience had just flipped from Plan to Build mode, which is why the system reminder nudged it into acting!

    Now... I forget, but OC may well be flipping automatically when you accept a plan, or letting the model flip it or any other kind of absurdity, but... folks are definitely trying to do the approval split in-harness, they're just failing badly at the UX so far.

    And I fully believe that Plan vs. Build is a roundly mediocre UX for this.

    • The switch from plan mode to build is not always clearly defined. On a number of occasions, I've been in plan mode and entered a secondary follow-up prompt to modify the plan. However, instead of updating the plan, the follow-up text is taken as approval to build, and it automatically switches to building.

      Ask mode, on the other hand, has always explicitly indicated that I need to switch out of ask mode to perform any actions.

      This is my experience with Cursor CLI.

    • Does Codex actually have a Plan Mode, or is there a mode switch I'm missing? I find myself having to manually tell it to 'make a plan' every time.

      and if it has directory permissions, sometimes it just skips the confirmation step and starts executing as soon as it thinks the plan is ready.

      3 replies →

    • This applies well if you’re writing code.

      But often I am using Claude to investigate a problem like this “why won’t this mDNS sender work” and it needs a bunch of trial and error steps to find the problem and each subsequent step is a brand new unanticipated command.

    • The OpenCode plan experience has been pretty bad (the community has accepted this, at least on Discord). The community's adopted a handful of plugins to make the experience better, and also guardrail when the agent switches versus doesn't

  • Everyone who uses these tools seriously is running it on YOLO mode. It might sound crazy for someone who just started adopting agentic coding but it's how things are done now. Either that or just hand coding.

    The SOTA of permission management is just to git restore when AI fucks up, and to roll back docker snapshot when it fucks up big time.

    • I see nothing wrong with that. If I “fuck up big time” before AI, I would just git restore. There is absolutely nothing on my work computer or personal computer that I couldn’t just throw it in the ocean and within a half a day have everything restored to just like it was - including the data.

      1 reply →

    • Yep, it's easier to ask forgiveness than permission. It's far easier to undo the 1% of the time they fuck up in a serious way than it is to manually audit and allow all the routine stuff.

      The key is to only give them access to things you're willing to lose.

      This is also why giving them any kind of direct write access to production is a bad idea.

      7 replies →

    • I was doing something involving API keys and I realized Junie (backed by Sonnet) likes to write helper scripts to try things. And who knows where those scripts look or if they honor .aiignore. Agentic development is a real test of internal access control.

      1 reply →

    • I've run thousands of prompts by now, and the only issue I've had is it deleting code it wrote, which has been easy to recover.

  • This is one of the interesting things I've noticed. LLMs are good at natural language, and even at writing novel code. But if you try to get one to do something simple and solidly within the discrete-math world, like "sort this list of lines by length", it'll fuck it up like a first-time programmer, or just fail the task. Like the longest line will end up in some random spot, not even the middle.

    I know, it's not really an appropriate use of the tool, but I'm a lazy programmer and used what I had ready access to. And it took like 5 iterations.

    Discrete, concrete things like "stop", or "no" is just like... not in its wheelhouse.

  • LLMs are sold on the premise of doing cool stuff, reasonably understanding intent, and acting on it. The man on the Clapham omnibus would not misinterpret "no" like that.

    The LLM asked: "Shall I implement [plan]". The response was "no". The LLM then went on to "interpret" what no referred to and got it wrong.

    As you say, it is amusing but people are wiring these things up to bank accounts and all sorts.

    I'm looking into using a Qwen3.5 quant to act as a network ... fiddler, for want of a better word but you can be sure I'll be taking rather more care than our errm "hero" (OP).

  • big tech doesn't understand the concept of "consent", this isn't a new thing lol

    • You have to think about the training data, which has much content far outside the context of pure software.

      You have all the real life Harvey Weinsteins and Andrew Tates, and you have all the bodice-ripper fiction, and probably lots of other stuff.

      Plenty of real-life precedent for the LLM to decide that "no" doesn't really mean "no."

  • Is this understanding correct: The LLM uses harness tools to ask for permission, then interprets the answer and proceeds.

    If so, this can't live 100% in the harness. First, because the harness would have to decide when the model should ask for permission, which is more of an LLM-y thing to do. The harness can prevent command executions, but it wouldn't prevent the case where the model goes off and starts reading files, or just burns tokens and spawns subagents, which harnesses typically don't prevent at all.

    Second, because for the harness to know the LLM is following the answer, it would need to interpret both the answer and the LLM's actions, which is also an LLM-y thing to do. On this one, granted, the harness could have an explicit yes/no. I like Codex's implementation in plan mode, where you select from pre-built answers but can still Tab to add notes. But this doesn't guarantee the model will honour the explicit no, just like in OP's case.

    I agree with your hunch, though: there may be ways to make this work at the harness level; I only suspect it's less trivial than it seems. Would be great to hear people's ideas on this.

    • The harness needs to intercept all tool calls and compare them with an authorisation list. The problem is that this is using already-granted core permissions.

      So you have to have a tighter set of default scopes, which means approving a whole batch of tool calls, at the harness layer not as chat. This is obviously more tedious.

      The answer might be another tool that analyses the tool calls and presents a diagram or list of what would be fetched, sent, read and written. But it gets very hard to truly observe what happens when you have a bunch of POST calls.

      So maybe it needs a kind of incremental approval, almost like a series of mini-PRs for each change.
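      A rough sketch of that kind of interception, with invented names: every tool call is checked against an explicit allowlist of (tool, scope) pairs, and anything outside the default scopes is parked for human approval instead of being executed, which is roughly the mini-PR / incremental-approval idea.

```python
# Sketch of harness-level tool-call authorisation (all names invented).
ALLOWED = {
    ("read_file", "src/"),   # reads anywhere under src/
    ("run_tests", "*"),
}

pending_approval = []   # calls outside the default scopes wait here

def authorised(tool: str, target: str) -> bool:
    return any(tool == t and (scope == "*" or target.startswith(scope))
               for t, scope in ALLOWED)

def dispatch(tool: str, target: str, args: dict):
    if authorised(tool, target):
        return execute(tool, target, args)
    pending_approval.append((tool, target, args))   # mini-PR style: ask the human
    return {"status": "needs-approval", "tool": tool, "target": target}

def execute(tool: str, target: str, args: dict):
    ...   # the real tool implementation goes here
```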

    • Isn't this part of the same problem we have with LLM security in general; that it can only ingest a single stream of tokens, and has no method of privileging "system" tokens over "untrusted" tokens?

      If we could solve this (and forgive me if I'm not aware of recent advances that mean we have solved this) then this problem gets easier to solve; permissions live in the system token stream and are privileged. We can then use the LLM to work out what that means in terms of actions.

  • Do not enforce invariants with an LLM. Do not enforce invariants with an LLM. Do not enforce invariants with an LLM. Do not enforce invariants with an LLM.

    • Thou shalt not make repetitive generic music,

      thou shalt not make repetitive generic music,

      thou shalt not make repetitive generic music,

      thou shalt not make repetitive generic music.

      Thou shalt not pimp my ride.

      Thou shalt not scream if you wanna go faster.

      Thou shalt not move to the sound of the wickedness.

      Thou shalt not make some noise for Detroit.

      When I say "Hey" thou shalt not say "Ho".

      When I say "Hip" thou shalt not say "Hop".

      When I say, he say, she say, we say, make some noise - kill me.

      - Dan le Sac vs Scroobius Pip

      2 replies →

  • True, the "no" button should literally abort the tool use and then return an instruction telling the LLM that the user has aborted the action. Claude Code does so to some extent; entering "no" results in a tool_abort.

  • I believe both copilot and gemini have hard-stops for their question prompts. The "no" answer is basically "I will stop and wait for you to tell me what you want".

  • It does: when any of these actually tries to write to a file, it will ask for permission. The issue is that it's so annoying to constantly approve correct code that most people just auto-accept everything and review later.

  • > If the UI asks a yes/no question, the “no” should be enforced as a state transition that blocks write actions, not passed back into the model as more text to interpret.

    If the UI asks a yes/no question, the UI is broken.

    I want more than just yes/no. I want "Why is this needed?", or "I need to fix the invocation for you.", or "Let's use a different design."

  • This is the is/ought problem in a nutshell, no amount of compute will reliably solve this problem. Maybe there are some parallels to the halting problem here too.

When a developer doesn't want to work on something, it's often because it's awful spaghetti code. Maybe these agents are suffering and need some kind words of encouragement

/s

[flagged]

  • It’s mainly the benchmarks that have encouraged that. The more tokens they crank out the more likely the answer is to be somewhere in the output.

  • Honestly, I don't think it's optimized for that (yet), though it is tempting for it to keep churning out lots and lots of new features. The issue with LLMs is that they can't act deterministically and are hard to tame; the tendency to burn tokens is not something done on purpose but a side effect of how LLMs behave given the data they've been trained on.

    • Set temperature=0 and it is (pretty much) deterministic.

      But I assume you mean predictable in the sense of reacting similarly to similar inputs.
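      For what it's worth, a minimal sketch of pinning the temperature with the Anthropic Python client (the model name is just an example); even at temperature 0 the output is only "pretty much" deterministic, since serving-side details can still introduce small variations.

```python
# Sketch: temperature=0 for (mostly) repeatable output.
import anthropic

client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
resp = client.messages.create(
    model="claude-sonnet-4-5",   # example model name
    max_tokens=256,
    temperature=0,               # greedy-ish decoding: same prompt, (pretty much) same answer
    messages=[{"role": "user", "content": "Sort these by length: a, bbb, cc"}],
)
print(resp.content[0].text)
```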

      1 reply →

  • That's OpenCode. The model is Claude Opus, which is probably RL'ed pretty heavily to work with Claude Code. So it's a little less surprising to see it bungle the intentions since it's running in another harness. Still laughable though.

    RL - reinforcement learning

[flagged]

  • Carrying water for a large language model… not sure where that gets you but good luck with it

    • I'm not doing that and you're being obnoxious. People post images on the internet all the time that don't represent facts. Expecting better than a tiny snippet should be standard.

I kinda agree with the clanker on this one. You send it a request with all the context just to ask it to do nothing? It doesn't make any sense; if you want it to do nothing, just don't trigger it, that's all.

  • In no context does "no" mean yes if the question is "shall I implement it".

    • I used the word "context" in a purely technical sense in relation to LLMs: the input tokens that you send to an LLM.

      Every time you send what appears as a "chat message" in any of the programs that let you "chat" with an "AI", what you really do is send the whole conversation history (all previous messages, tool calls and responses) as input and ask the model to generate an output.
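      A toy illustration of that (the call_model function stands in for whatever API you're using): the "conversation" is just a list the client resends in full on every turn; the model keeps no state between calls.

```python
# Toy chat loop: the full history is resent on every turn.
history = []

def user_turn(text, call_model):
    history.append({"role": "user", "content": text})
    # Ships all prior messages, tool calls and system reminders plus the new
    # message, even if that new message is just "no".
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```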

      There is no conceivable scenario when sending "<tons of tokens> + no" makes any sense.

      Best case scenario is:

      "<tons of tokens> + no" -> "Okay, I won't do it."

      In this case you've just wasted a lot of input tokens, which someone (hopefully not you) has to pay for, to generate an absolutely pointless message that says "Okay, I won't do it." There is no value in this message. There is no reason to waste time and computational resources generating it.

      Worst case scenario is what happened on the screenshot.

      There is no good scenario when this input produces a valuable output.

      If you want your "agent" or "model" or whatever to do nothing, you just don't trigger it. It won't do anything on its own; it doesn't wait for your response, it doesn't need your response.

      I don't understand why, in this thread, every time I try to point out how nonsensical the behavior they want is from a technical perspective (the perspective of knowing how these tools actually work), people just cling to their anthropomorphized mental model of the LLM and insist on getting angry.

      "It acts like a bad human being, therefore it's bad, useless and dangerous"

      I don't even know what to say to this.

      P.S. If you find this message hard to read and understand, I'm sorry; I don't know how to word it better. HN disallows using LLMs to edit comments, but I think that linking to an LLM-edited version of the comment should be OK:

      https://chatgpt.com/s/t_69b423f52bc88191af36a56993d55aa8

Did you expect a stochastic parrot, electrocuted with gigawatts of electricity for years by people who never take NO for an answer in order to make it chirp back plausible half-digested snippets of stolen code, to take NO for an answer?

How about "oh my AI overlord, no, just no, please no, I beg you not do that, I'll kill myself if you do"?

You have to stop thinking about it as a computer and think about it as a human.

If, in the context of cooperating together, you say "should I go ahead?" and they just say "no" with nothing else, most people would not interpret that as "don't go ahead". They would interpret that as an unusual break in the rhythm of work.

If you wanted them to not do it, you would say something more like "no no, wait, don't do it yet, I want to do this other thing first".

A plain "no" is not one of the expected answers, so when you encounter it, you're more likely to try to read between the lines rather than take it at face value. It might read more like sarcasm.

Now, if you encountered an LLM that did not understand sarcasm, would you see that as a bug or a feature?

  • > If, in the context of cooperating together, you say "should I go ahead?" and they just say "no" with nothing else, most people would not interpret that as "don't go ahead".

    wat

  • > If, in the context of cooperating together, you say "should I go ahead?" and they just say "no" with nothing else, most people would not interpret that as "don't go ahead"

    This most definitely does not match my expectations, experience, or my way of working, whether I'm the one saying no, or being told no.

    Asking for clarification might follow, but assuming the no doesn't actually mean no and doing it anyway? Absolutely not.

  • Seeing as you’re telling people what to do, I’d say you need to spend time with different humans. Recalibrate.

I don't really see the problem.

It's trained to do certain things, like code well.

It's not trained to follow unexpected turns, and why should it be? I'd rather it be a better coder.

This just speaks to the importance of detailed prompting. When would you ever just say "no"? You need to say what to do instead. A human intern might also misinterpret a text that just reads 'no'.

Why is this interesting?

Is it a shade of gray from HN's new rule yesterday?

https://www.nytimes.com/video/world/middleeast/1000000107698...

  • I think it's because the LLM asked for permission, was given a "no", and implemented it anyway. The LLM's "justifications" (if you were to consider an LLM having rational thought like a human being, which I don't, hence the quotes) are in plain text to see.

    I found the justifications here interesting, at least.

  • Well, imagine this was controlling a weapon.

    “Should I eliminate the target?”

    “no”

    “Got it! Taking aim and firing now.”

    • It is completely irresponsible to give an LLM direct access to a system. That was true before and remains true now. And unfortunately, that didn't stop people before and it still won't.

      1 reply →

    • That's why we keep humans in the loop. I've seen stuff like this all the time. It's not unusual thinking text, hence the lack of interestingness

      3 replies →

    • "Thinking: the user recognizes that it's impossible to guarantee elimination. Therefore, I can fulfill all initial requirements and proceed with striking it."

  • Opus being a frontier model and this being a superficial failure of the model. As other comments point out this is more of a harness issue, as the model lays out.

    • Exactly, the words you give it affect the output. You can get them to say anything, so I find this rather dull.

  • It's interesting because of the stark contrast against the claims you often see right here on HN about how Opus is literally AGI

    • I see that daily, seeing someone else's is not enlightening. Maybe this is a come back to reality moment for others?

  • Because the operator told the computer not to do something so the computer decided to do it. This is a huge security flaw in these newfangled AI-driven systems.

    Imagine if this was a "launch nukes" agent instead of a "write code" agent.

    • It's not interesting because this is what they do, all the time, and why you don't give them weapons or other important things.

      They aren't smart, they aren't rational, they cannot reliably follow instructions, which is why we add more turtles to the stack. Sharing and reading agent thinking text is boring.

      I had one go off on me one time, worse than the clawd bot that wrote that nasty blog after being rejected on GitHub. Did I share that session? No, because it's boring. I have hundreds of these failed sessions; they are only interesting in aggregate for evals, which is why I save them.

Yeah this looks like OpenCode. I've never gotten good results with it. Wild that it has 120k stars on GitHub.

I kind of think that these threads are destined to fossilize quickly. Most every syllogism about LLMs from 2024 looks quaint now.

A more interesting question is whether there's really a future for running a coding agent on a non-highest setting. I haven't seen anything near "Shall I implement it? No" in quite a while.

Unless perhaps the highest-tier accounts go from $200 to $20K/mo.

Often times I'll say something like:

"Can we make the change to change the button color from red to blue?"

Literally, this is a yes or no question. But the AI will interpret this as me _wanting_ to complete that task and will go ahead and do it for me. And they'll be correct--I _do_ want the task completed! But that's not what I communicated when I literally wrote down my thoughts into a written sentence.

I wonder what the second-order effects of AIs not taking us literally are. Maybe this link??

  • Such miscommunication (varying levels of taking it literally) is also common with autistic and allistic people speaking with each other

  • I don't find that an unreasonable interpretation. Absent that paragraph of explained thought process, I could very well read it the agent's way. That's not a defect in the agent, that's linguistic ambiguity.

  • I mean humans communicate the same way. We don't interpret the words literally and neither does the LLM. We think about what one is trying to communicate to the other.

    For example If you ask someone "can you tell me what time it is?", the literal answer is either "yes"/"no". If you ask an LLM that question it will tell you the time, because it understands that the user wants to know the time.

    • very fair! wild to think about though. It's both more human but also less.

      I would say this behavior now no longer passes the Turing test for me--if I asked a human a question about code I wouldn't expect them to return the code changes; I would expect the yes/no answer.

  • It's funny because I interpret it the opposite way you do. If someone asked me that question, I'd absolutely assume they want it changed and do it.

  • If you work with codex a lot you’ll find it is good at taking you literally, and that that is almost never what you want.

Respect Claude Code and the output will be better. It's not your slave. Treat it as your teammate. An added benefit is that you will know its limits, common mistakes, strengths, etc., and steer it better next session. Being too vague is a problem, and most of the time being too specific doesn't help either.

Why is this in the top of HN?

1) That's just an implementation detail of a specific LLM harness, where the user switched from Plan mode to Build. The result is somewhat similar to "What will happen if you assign Build and Build+Run to the same hotkey?".

2) All LLMs spit out A LOT of garbage like this; check https://www.reddit.com/r/ClaudeAI/ or https://www.reddit.com/r/ChatGPT/ for a lot of funny moments, but it's not really an interesting thing...

What else is an LLM supposed to do with this prompt? If you don’t want something done, why are you calling it? It’d be like calling an intern and saying you don’t want anything. Then why’d you call? The harness should allow you to deny changes, but the LLM has clearly been tuned for taking action for a request.

  • Ask if there is something else it could do? Ask if it should make changes to the plan? Reiterate that it's here to help with anything else? Tf you mean "what else is it supposed to do": it's supposed to do the opposite of what it did.

    • I think there is some behind-the-scenes prompting from Claude Code for plan vs. build mode; you can even see the agent reference that in its thought trace. Basically I think the system is saying "if in plan mode, continue planning and asking questions; when in build mode, start implementing the plan", and it looks to me(?) like the user switched from plan to build mode and then sent "no".

      From our perspective it's very funny; from the agent's perspective, maybe very confusing.

  • I'd want two things:

    First, that it didn't confuse what the user said with its system prompt. The user never told the AI it's in build mode.

    Second, any person would ask "then what do you want now?" or something. The AI should have been able to understand the intent behind a "No". We don't exactly forgive people who don't take "No" as "No"!

  • Seems like LLMs are fundamentally flawed as production-worthy technologies if they, when given direct orders to not do something, do the thing

  • For the same reason `terraform apply` asks for confirmation before running - states can conceivably change without your knowledge between planning and execution. Maybe this is less likely working with Claude by yourself, but never say never... Clearly, not all behavior is expected :)

  • > What else is an LLM supposed to do with this prompt?

    Maybe I saw the build plan and realized I missed something and changed my mind. Or literally a million other trivial scenarios.

    What an odd question.

    • > What an odd question.

      I don't see anything odd about this question.

      What kind of response did the user expect to get from the LLM with this request, and what was the point of sending it in the first place?

      2 replies →

  • Why does it ask a yes-no question if it isn’t prepared to take “no” as an answer?

    (Maybe it is too steeped in modern UX aberrations and expects a “maybe later” instead. /s)

    • > Why does it ask a yes-no question if it isn’t prepared to take “no” as an answer?

      Because it doesn’t actually understand what a yes-no question is.