Changes in the system prompt between Claude Opus 4.6 and 4.7

1 day ago (simonwillison.net)

The past month made me realize I needed to make my codebase usable by other agents. I was mainly using Claude Code. I audited the codebase, identified the points where I was coupled to it, and refactored so that I can use codex, gemini, or claude interchangeably.

Here are a few changes:

1. AGENTS.md by default across the codebase; a script makes sure a CLAUDE.md symlink is present wherever there's an AGENTS.md file (see the sketch below)

2. Skills are now in a 'neutral' dir, and per-agent scripts make sure they are linked wherever the coding agent needs them to be (e.g. .claude/skills)

3. Hooks are now file listeners or git hooks. This one is trickier, as some of these hooks compensate for or cater to a specific agent's capabilities

4. Subagents and commands also have their own neutral folders, with scripts to transform them per agent and linters to check they work

5. An `agent` command now randomly selects claude|codex|gemini, instead of me typing `claude` to start a coding session

I guess in general, auditing where the codebase is coupled and keeping it neutral makes it easier to stop depending solely on specific providers. Makes me realize they don't really have a moat; all this took probably less than an hour.
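
A minimal sketch of the kind of script item 1 describes (assuming a plain relative symlink next to each AGENTS.md; this is my own illustration, not the poster's actual script):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: keep a CLAUDE.md symlink next to every AGENTS.md."""
from pathlib import Path

def link_claude_md(root: Path) -> None:
    for agents_md in root.rglob("AGENTS.md"):
        claude_md = agents_md.with_name("CLAUDE.md")
        # Skip if a real file or an existing link is already there
        if claude_md.is_symlink() or claude_md.exists():
            continue
        # Relative symlink so the repo stays relocatable
        claude_md.symlink_to(agents_md.name)
        print(f"linked {claude_md} -> {agents_md.name}")

if __name__ == "__main__":
    link_claude_md(Path("."))
```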

  • I've been doing the same except that I'm done with Claude. Cancelled my subscription. I can't use a tool where the limits vary so wildly week to week, or maybe even day to day.

    So I'm migrating to pi. I realized that the hardest thing to migrate is hooks - I've built up an expensive collection of Claude hooks over the last few months, and unlike skills, hooks are in a Claude-specific format. But I'd heard people say "just tell the agent to build an extension for pi", so I did. I pointed it at the Claude hooks folder and basically said make them work in pi, and it did, very quickly.

    • I'm leaning in this direction. Recently slopforked pi to python and created a version that's basically a loop, an LLM call to openrouter, and a hook system using pluggy. I have been able to one-shot pretty much any feature a coding agent has. Still a toy project, but this thread seems to be leading me towards maintaining my own harness. I have a feeling it will be just documenting features in other systems and maintaining evals/tests.
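
      A minimal sketch of what such a pluggy hook layer could look like (the hook names and the blocking convention are my own assumptions, not pi's or Claude Code's actual API):

      ```python
      # Hypothetical sketch of a pluggy-based hook layer for a small agent loop.
      import pluggy

      hookspec = pluggy.HookspecMarker("miniagent")
      hookimpl = pluggy.HookimplMarker("miniagent")

      class AgentHookSpec:
          @hookspec
          def pre_tool_use(self, tool_name: str, args: dict):
              """Runs before a tool call; return a string to block the call."""

      class BlockDestructiveBash:
          @hookimpl
          def pre_tool_use(self, tool_name, args):
              if tool_name == "bash" and "rm -rf" in args.get("command", ""):
                  return "blocked: destructive command"

      pm = pluggy.PluginManager("miniagent")
      pm.add_hookspecs(AgentHookSpec)
      pm.register(BlockDestructiveBash())

      # Inside the agent loop, before executing a tool call:
      verdicts = pm.hook.pre_tool_use(tool_name="bash", args={"command": "rm -rf /"})
      if any(verdicts):  # any plugin returning a truthy value blocks the call
          print(verdicts)  # ['blocked: destructive command']
      ```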

  • Have you got any advice on making agents from different providers work together?

    In Claude, I've seen cases where spawning subagents backed by Gemini or Codex would raise strange permission errors (even though they don't happen with other CLI commands!), and Claude would silently continue, impersonating the other agent. Only by checking thoroughly was I able to understand that the agent I actually wanted had failed.

    • Not sure if you mean 1) sub-agent definitions (similar to skills in Claude Code) or 2) CLI scripts that use other coding agents (e.g. claude calling gemini via the CLI).

      For (1) I'm trying to come up with a simple enough definition that can be 'llm compiled' into each format. The permissions format requires something like this too, and putting these together needs some more debugging.

      For (2), the only one I've played with is `claude -p` and it seems to work for fairly complex stuff, but I run it with `--dangerously-skip-permissions`.
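
      For what it's worth, a minimal sketch of (2) as a wrapper (my own illustration, not the commenter's setup) could be as simple as shelling out to the CLI in print mode:

      ```python
      # Shell out to another coding agent non-interactively and capture its answer.
      import subprocess

      def ask_claude(prompt: str) -> str:
          result = subprocess.run(
              ["claude", "-p", prompt, "--dangerously-skip-permissions"],
              capture_output=True, text=True, check=True,
          )
          return result.stdout

      print(ask_claude("Summarize the TODOs in src/ and propose an order to tackle them."))
      ```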

    • It works out of the box with something like opencode. I've had no issue creating rather complex interactions between agents plugged into different models.

  • How do you share the context/progress of goal across agents?

    • I implemented a client for each so that the session history is easy to extract regardless of the agent (somewhat related to progress of the goal).

      Context: AGENTS.md is standard across all, and subdirectories have their own AGENTS.md, so in a way this is a tree of instructions. Skills are also standard, so it's a bunch of indexable .md files that all agents can use.

> The new <acting_vs_clarifying> section includes: When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first.

Uff, I've tried stuff like this in my prompts, and the results are never good. I much prefer the agent to prompt me upfront to resolve that before it "attempts" whatever it wants. Kind of surprised to see that they added that.

  • I even have a specific, non-negotiable phase in the process where the model MUST interview me and create an interview file with everything captured. The plan file it produces must always include this file as an artifact, and the interview takes the highest precedence.

    Otherwise, the intent gets lost somewhere in the chat transcript.

    • The raw Q&A is essential. I think Q&A works so well because it reveals how the model is "thinking" about what you're working on, which allows for correction and guidance upfront.

    • Are these your own skills files or are you using something off the shelf like bmad or specify-kit?

  • I've recently started adding something along the lines of "if you can't find or don't know something, don't assume. Ask me." It's helped cut down on me having to tell it to undo or redo things a fair amount. I also have used something like, "Other agents have made mistakes with this. You have to explain what you think we're doing so I can approve." It's kind of stupid to have to do this, but it really increases the quality of the output when you make it explain, correct mistakes, and iterate until it tells you the right outcome before it operates.

    Edit: forgot "don't assume"

  • I wonder if they're optimizing for metrics that look superficially worse if the system asks questions about ambiguity early. I've had times where those questions tell me "ah, shit, this isn't the right path at all", and that abandoned session probably shows up in their usage stats. What would be much harder to get from the usage stats is "would I have been happier if I had to review a much bigger blob of output to realize it was underspecified in a breaking way?" But the answer has been uniformly "no." This, in fact, is one of the biggest things that has made it easier to use the tools in "lazy" ways compared to a year ago: they can help you with your up-front homework. But the dialogue is key.

    • Or they're optimizing for increased revenue? If Claude goes down a completely wrong path because it just assumes it knows what you want rather than asking you, and you have to undo everything and start again, that obviously uses many more tokens than if you had been able to clarify the misunderstanding early on.

  • I usually need to remind it 5 times to do the opposite, because it makes decisions that I don't like or that are harmful to the project, so if it lands in Claude Code too, I have hard times ahead.

    I try to explicitly request Claude to ask me follow-up questions, especially multiple-choice ones (it explains possible paths nicely), but if I don't, or when it decides to ignore the instructions (which happens a lot), the results are either bad... or plain dangerous.

    • It is a big problem that many people I know face every day. Sometimes we just wonder whether we are the dumb ones, since the demos show everything just works.

  • Dammit, that’s why I could never get it to not try to one-shot answers; it’s in the god damn system prompt… and it explains why no amount of user "system" prompt could fix this behavior.

  • With my use of Claude code, I find 4.7 to be pretty good about clarifying things. I hated 4.6 for not doing this and had generally kept using 4.5. Maybe they put this in the chat prompt to try to keep the experience similar to before? I definitely do not want this in Claude code.

    • I agree with your thoughts on 4.6.

      It's possible they tried to train this out of it for 4.7 and overcorrected, and the addition to the system prompt is to rein it in a bit.

  • Having to "unprompt" behaviour I want that Anthropic thinks I don't want is getting out of hand. My system prompts always try to get Claude to clarify _more_.

  • Seriously, when you're conversing with a person would you prefer they start rambling on their own interpretation or would you prefer they ask you to clarify? The latter seems pretty natural and obvious.

    Edit: That said, it's entirely possible that large and sophisticated LLMs can invent some pretty bizarre but technically possible interpretations, so maybe this is to curb that tendency.

    • > The latter seems pretty natural and obvious.

      To me too. If something is ambiguous or unclear when someone gives me something to do, I need to ask them to clarify; anything else would be borderline insane in my world.

      But I know so many people whose approach is basically "Well, you didn't clearly state/say X, so clearly that was up to me to interpret however I wanted, usually the easiest/shortest way for me", which is exactly how LLMs seem to take prompts with ambiguity too, unless you strongly prompt them not to make a "reasonable attempt now" without asking questions.

    • —So what would theoretically happen if we flipped that big red switch?

      —Claude Code: FLIPS THE SWITCH, does not answer the question.

      Claude does that in React, constantly starting a wrong refactor. I’ve been using Claude for only 4 weeks, but for the last 10 days I’ve been getting anger issues at the new nerfing.

  • > I've tried stuff like these in my prompts, and the results are never good

    I've found that Google AI Mode & Gemini are pretty good at "figuring it out". My queries are oft times just keywords.

> Claude keeps its responses focused and concise so as to avoid potentially overwhelming the user with overly-long responses. Even if an answer has disclaimers or caveats, Claude discloses them briefly and keeps the majority of its response focused on its main answer.

I am strongly opinionated against this. I use Claude in some low-level projects where those longer answers save me from doing really silly things, as well as serving as learning material along the way.

This should not be Anthropic's hardcoded choice to make. It should be an option, building the system prompt modularly.

I'm fascinated that Anthropic employees, who are supposed to be the LLM experts, are using tricks like these which go against how LLMs seem to work.

Key example for me was the "malware" tool call section that included a snippet with intent "if it's malware, refuse to edit the file". Yet because it appears dozens of times in a convo, eventually the LLM gets confused and will refuse to edit a file that is not malware.

I've resorted to using tweakcc to patch many of these well-intentioned sections and re-work them to avoid LLM pitfalls.

  • These aren't so much tricks as just one layer of defense. But prompting is useless as defense anyway, as you can use the API directly without these prompts.

    I run claude code with my own system prompt and tooling on top of it. tweakcc broke too often and had too many glitches.

The eating disorder section is kind of crazy. Are we going to incrementally add sections for every 'bad' human behaviour as time goes on?

  • They have to secretly add these guardrails because the alternative would be to train the users out of consulting these things as if they are advanced all-knowing alien-technogawds. And that would be bad for business.

    The better solution, I think, would be a reality/personal responsibility approach: teach the consumers that the burden of interpretation is on them and not the magic 8ball. For example if your AI tells you to kill your parents or that you’ve discovered new math that makes time travel possible, etc then: 1. Stop 2. Unplug 3. Go outside 4. Ask a human for a sanity check.

    Since that would be bad for business and take a lot of effort on the user side (while being very embarrassing), and since you obviously can’t do that right before an IPO & in the middle of a global economic war, secretive moral frameworks have to be installed instead.

    If you are what you eat then you believe what you consume. Ironically, I think this undisclosed and hidden moral shaping of billions of people will be the most dangerous. Imagine all the things we could do if we can just, ever-so-slightly, move the Overton window / goal posts on w/e topic day by day, prompt by prompt.

    Personally I find AI output insidiously disarming and charming and I think I’m in the norm. So while we’ve been besieged by propaganda since time immemorial I do worry that AI is a special case.

  • Just like someone growing up and learning how to interact with other humans might learn the same lesson?

    If Claude is going to be Claude, we should support these kind of additions.

  • Even better, adding it to the system prompt is a temporary fix; then they'll work it into post-training, so the next model release will probably remove it from the system prompt. At least when it's in the system prompt we get some visibility into what's being censored; once it's in the model it'll be a lot harder to understand why "How many calories does 100g of pasta have?" only returns "Sorry, I cannot divulge that information".

    • Just assume each model iteration incorporates all the censorship prompts that came before, and compile the possible list from the system prompt history. To validate it, design an adversarial test against the items in the compiled list.

  • That part of the system prompt is just stating that telling someone who has an actual eating disorder to start counting calories or micro-manage their eating in other ways (a suggestion that the model might well give to an average person for the sake of clear argument, which would then be understood sensibly and taken with a grain of salt) is likely to make them worse off, not better off. This seems like a common-sense addition. It should not trigger any excess refusals on its own.

    • The problem is that this is an incredibly niche / small issue (i.e. <<1% of users, let alone prompts, need this clarification), and if you add a section for every single small thing like this, you end up with a massively bloated prompt. Notice that every single user of Claude is paying for this paragraph now! This single paragraph is going to legitimately cost anthropic at least 4, maybe 5 digits.

      At some point you just have to accept that LLMs, like people, make mistakes, and that's ok!

    • > This seems like a common-sense addition.

      Mm, yes. Let's add mitigation for every possible psychological disorder under the sun to my Python coding context. Very common-sense.

  • This. It's like the exaggerated safety instructions everywhere: "do not lean ladder on high voltage wires". Only worse, because you can choose to ignore such instructions when they don't apply, but Claude cannot.

    In the best case, wrapping users in cotton wool is annoying. In the worst case, it limits the usefulness of the tool.

  • When you are worth hundreds of billions, people start falling over themselves running to file lawsuits against you. We're already seeing this happen.

    So spending $50M to fund a team to weed out "food for crazies" becomes a no-brainer.

    • It is a no brainer. If a company of any size is putting out a product that caused cancer we wouldn't think twice about suing them. Why should mental health disorders be any different?

    • It's so shameful.

      We let people buy kitchen knives. But because the kitchen knife companies don't have billions of dollars, we don't go after them.

      We go after the LLM that might have given someone bad diet advice or made them feel sad.

      Nevermind the huge marketing budget spent on making people feel inadequate, ugly, old, etc. That does way more harm than tricking an LLM into telling you you can cook with glue.

  • Seems so, unless we manage to pivot to open-weight models. Hopefully, the Chinese will lead the way along with their consumer hardware.

    Hard for me to say this because I have always been pro-Western and suddenly it seems like the world has flipped.

  • It feels like half of AI research is math, and the other half is coming up with yet another way to state "please don't do bad things" in the prompt that will sure work this time I promise.

  • The alignment favors supporting healthy behaviors so it can be a thin line. I see the system prompt as "plan B" when they can't achieve good results in the training itself.

    It's a particularly sensitive issue so they are just probably being cautious.

    • I want a hyperscaler LLM I can fine-tune and neuter. Not a platform or product. Raw weights hooked up to pure tools.

      This era of locked hyperscaler dominance needs to end.

      If a third-tier LLM company made their weights available and they were within 80% of Opus, and they forced you to use their platform to deploy, or to license them if you ran them elsewhere, I'd be fine with that. As long as you can access and download the full raw weights and lobotomize them as you see fit.

  • Are the prompts used both by the desktop app, like typical chatbot interfaces, and Claude Code?

    Because it's a waste of my money to check, on every turn, whether my Object Pascal compiler is developing an eating disorder.

  • In principle, they could make such responses part of their training data. I guess it is just easier to do it through prompting.

  • Could be that Claude has particular controversial opinions on eating disorders.

    • LLMs have been trained to eagerly answer a user’s query.

      They don’t reliably have the judgment to pause and proceed carefully if a delicate topic comes up. Hence these bandaids in the system prompt.

    • There are communities of people who publicly blog about their eating disorders. I wouldn't be surprised if the laymen's discourse is over-represented in the LLM's training data compared to the scientific papers.

  • I mean, that's what humans have always done with our morals, ethics, and laws, so what alternative improvement do you have to make here?

  • Imagine the kind of human that never adapts their moral standpoints. Ever. They believe what they believed when they were 12 years old.

    Letting the system improve over time is fine. The system prompt is an inefficient place to do it, but it's just a patch until the model can be updated.

  • Yup. Anyone who is surprised by this has not been paying attention to the centralization of power on the internet in the past 10 years.

I feel like we are at the point where improvements in one area diminish functionality in others. I see some things better in 4.7 and some in 4.6. I assume they’ll split it into different characters soon.

I'm curious as to why 4.7 seems obsessed with avoiding any actions that could help the user create or enhance malware. The system prompts seem similar on the matter, so I wonder if this is an early attempt by Anthropic to use steering vector injection?

The malware paranoia is so strong that my company has had to temporarily block use of 4.7 on our IDE of choice, as the model was behaving in a concerningly unaligned way, as well as spending large amounts of token budget contemplating whether any particular code or task was related to malware development (we are a relatively boring financial services entity - the jokes write themselves).

In one case I actually encountered a situation where I felt that the model was deliberately failing to execute a particular task, and when queried, the tool output that it was trying to abide by directives about malware. I know that model introspection reporting is of poor quality and unreliable, but in this specific case I did not 'hint' it in any way. This feels qualitatively like Claude Golden Gate Bridge territory, hence my earlier contemplation on steering vectors. I've seen many other people online complaining about the malware paranoia too, especially on reddit, so I don't think it's just me!

  • Note that these are the "chat" system prompts - although it's not mentioned I would assume that Claude Code gets something significantly different, which might have more language about malware refusal (other coding tools would use the API and provide their own prompts).

    Of course it's also been noted that this seems to be a new base model, so the change could certainly be in the model itself.

  • No, you underestimate how huge the malware problem is right now. People try to publish fake download landing pages for shell scripts, or even for Claude Code, on https://playcode.io every day. They pay $$$ for Google ads to be in the top position. How do Google ads allow this? They can’t verify every shell script.

    No, I am not joking. Every time you install something, there is a risk you clicked a wrong page with the exact same design.

    • Also increasing numbers of attacks against Anna's Archive with fake cloned front end web GUIs leading to malware scripts.

    • He's not talking about malware awareness. He's talking about a bug I've seen too, where Claude adds extra malware-awareness justification turns for *every* tool call. Like every file read of the repo we've been working on.

  • I "fixed" this for myself with tweakcc which let's you patch the system prompts. I changed the malware part to just be "watch out for malware" and it's stopped being unaligned.

    They really should hand off read() tool calls to a lean cybersecurity model to identify if it's malware (separately from the main context), then take appropriate action.

  • Their marketing is working overtime to sell the image that their models are capable of creating uber sophisticated malware, so every single thing they do from here on out is going to have this fear mongering built in.

    Every statement they make, hell even the models themselves are going to be doing this theater of "Ooooh scary uber h4xx0r AI, you can only beat it if you use our Super Giga Pro 40x Plan!!". In a month or two they'll move onto some other thing as they always do.

  • Presumably because it has become extremely good at writing software, and if it succeeds at helping someone spread malware, especially one that could use Claude itself (via local user's plans) to self-modify and "stay alive", it would be nearly impossible to put back in the bottle.

    • That would put itself back in the bottle by running killall to fix a stuck task, or deleting all core logic and replacing it with a to-do to fix a test.

I knew these system prompts were getting big, but holy fuck. More than 60,000 words. With the 3/4 words per token rule of thumb, that's ~80k tokens. Even with 1M context window, that is approaching 10% and you haven't even had any user input yet. And it gets churned by every single request they receive. No wonder their infra costs keep ballooning. And most of it seems to be stable between claude version iterations too. Why wouldn't they try to bake this into the weights during training? Sure it's cheaper from a dev standpoint, but it is neither more secure nor more efficient from a deployment perspective.

  • I’m just surprised this works at all. When I was building AI automations for a startup in January, even 1,000 word system prompts would cause the model to start losing track of some of the rules. You could even have something simple like “never do X” and it would still sometimes do X.

    • Two things: the model and runtime matter a lot; smaller/quantized models are basically useless at strict instruction following compared to SOTA models. The second thing is that "never do X" doesn't work that well; if you want it to "never do X" you need to adjust the harness and/or steer it with "positive prompting" instead. Don't do "Never use uppercase" but instead do "Always use lowercase only", as a silly example; you'll get a lot better results. If you've trained dogs ("positive reinforcement training") before, this will come easier to you.

    • I created a test evaluation (they friggen' stole the word harness) that runs a changed prompt and compares success pass/fail, the number of tokens, and the time for any change. It is an easy thing to do. The best part is I set up an orchestration pattern where one agent iterates on updating the target agent's prompts. Not only can it evaluate the outcome after the changes, it can update and rerun, self-healing and fixing itself.

  • > And it gets churned by every single request they receive.

    Not true, it gets calculated once, essentially baked into the initial state, and stored in a standard K/V prefix cache. Processing only happens on new input (minus attention, which will still have to contend with tokens from the prompt).
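
    For illustration, this is roughly what the user-facing version of that caching looks like with the Anthropic API (a sketch; the model name is a placeholder, and the internal caching of claude.ai's own system prompt isn't something API users control):

    ```python
    # Mark the big system prompt as cacheable so later requests only pay to
    # process new input instead of re-churning the whole prefix.
    import anthropic

    very_long_system_prompt = open("system_prompt.md").read()

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-opus-4-1",  # placeholder model name
        max_tokens=512,
        system=[
            {
                "type": "text",
                "text": very_long_system_prompt,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": "Hello"}],
    )
    # usage reports cache_read_input_tokens when the prefix was served from cache
    print(response.usage)
    ```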

  • I assume the reason it’s not baked in is so they can “hotfix” it after release. But surely that many things don’t need updates afterwards; there are novels that are shorter.

    • Yeah that was the original idea of system prompts. Change global behaviour without retraining and with higher authority than users. But this has slowly turned into a complete mess, at least for Anthropic. I'd love to see OpenAI's and Google's system prompts for comparison though. Would be interesting to know if they are just more compute rich or more efficient.

  • There are different sections in the markdown for different models. It is only 3,000-4,000 words.

  • That's usually not how these things work. Only parts of the prompt are actually loaded at any given moment. For example, "system prompt" warnings about intellectual property are effectively alerts that the model gets. ...Though I have to ask in case I'm assuming something dumb: what are you referring to when you said "more than 60,000 words"?

    • The system prompt is always loaded in its entirety IIUC. It's technically possible to modify it during a conversation but that would invalidate the prefill cache for the big model providers.

    • What you're describing is not how these things usually work. And all I did was a wc on the .md file.

  • Does Claude Code (or whatever harness) have its own system prompt on top of Opus's?

  • > And it gets churned by every single request they receive

    It gets pretty efficiently cached, but does eat the context window and RAM.

> “I don’t have access to X” is only correct after tool_search confirms no matching tool exists.

Yay! This will be a big win. I'm glad they fixed this. The number of times I've had to prompt "you do have access to GitHub"...

> If a user shows signs of disordered eating, Claude should not give precise nutrition, diet, or exercise guidance

I wonder which are the "signs of disordered eating" on which Claude relies.

Restrictions everywhere, don't do that don't do this....

Users need to unite and take control back, or be controlled

  • How do you propose people do that with a frontier cloud model?

    Also, people already run local AI.

    Are you proposing a public fund for frontier level open weights models? $1 Trillion from between the couch cushions?

Interesting that it's not a direct "you should" but an omniscient 3rd person perspective "Claude should".

Also full of "can" and "should" phrases: it feels both passive and subjunctive, wishes rather than strict commands (I guess these are better termed “modals”, but I'm not an expert).

  • “Claude” is more specific than “you”. Why rely on attention to figure out who’s the subject? Also, it is the belief of the people at Anthropic that rule-based alignment won’t work, and that’s why they wrote the soul document as “something like you’d write to your child to show them how they should behave in the world” (I paraphrase). I guess the system prompt should be similar in this aspect.

  • Yes, I was interested in that too. It suggests that in writing our own guidance we should follow a similar style, but I rarely if ever see people doing that. Most people still stick to "You" or an abstract voice: "There is ...", "Never do ...", etc.

    It must be that they are training the sense of identity as Claude very deeply into the model. Which makes me wonder how it then works when it is asked to assume a different identity - "You are Bob, a plumber who specialises in advising on the design of water systems for hospitals". Now what? Is it confused? Is it still going to think all the verbiage about what "Claude" does applies?

    • I almost exclusively use the royal We. "We are working on a new feature and we need it to meet these requirements...", "it looks like we missed a bug, let's take another look at.."

      I also talk this way with people because I feel it makes it clear we're collaborating and fault doesn't really matter. I feel it lets junior members take more ownership of the successes as well. If we ever get juniors again.

  • That’s because Anthropic does not consider their model as having a personality, but rather that it simulates the experience of an abstract entity named Claude.

    • That sounds really interesting, but my google-fu is not up to task here, I'm getting pages and pages of nonsense asking if Claude is conscious. Can you elaborate?

>“If a user indicates they are ready to end the conversation, Claude does not request that the user stay in the interaction or try to elicit another turn and instead respects the user’s request to stop.”

Seems like a good idea. Don't think I've ever had any of those follow-up suggestions from a chatbot be actually useful to me.

For me, 4.7 always gave a lot of options even when there was a clear winner, which just produces decision fatigue.

  • Decision fatigue may honestly be a learnt artifact from RLHF, which is discouraging.

That's how bloat happens. The more people you add to the team, the more likely there will be one grump who thinks that the thing they care about at the moment deserves to be added to the system prompt.

The acting_vs_clarifying change is the one I notice most as a heavy user. Older Claude would ask 3 clarifying questions before doing anything. Now it just picks the most reasonable interpretation and goes. Way less friction in practice.

  • Haven't had a chance to test 4.7 much, but one of my pet peeves with 4.6 is how eager it is to jump into implementation. Though maybe 4.7 is smarter about this now.

  • I really hate that change; it's now regularly picking a bad interpretation instead of asking.

  • I have the opposite experience. It now picks the most inane interpretation or makes wild assumptions, and I have to keep interrupting it more than ever.

I miss 4.5. It was gold.

  • Rose tinted glasses

    • Nah, until recently I still had access via the web chat interface, and would often paste a transcript and files for something 4.7 keeps fucking up, paste the response into files as appropriate, and attempt to continue with 4.7.

      I swear 4.6+ looks for reasons to ask clarifying questions sometimes, even when really not required, and this fucks flow/quality up in a big way.

      I just wish there was an "I'm not stupid" checkbox you could use to get minimal-interference access to Claude. I'm starting to use local models again, which I haven't in a while because Claude was so much better, but once I fully lose access to 4.5 it might be time to go back to fully local for good. 4.6+ fails to add value for me; projects that 4.5 and earlier handled well on the first try now require multiple prompts and feedback. Exact same initial prompt and project files extracted from an archive. I liked Claude because it aced those tests while local models required handholding. Now Claude requires handholding, so why use it over local? Once 4.5 leaves openrouter it might just be time.

    • 4.5 was clearly better than .6 and .7. Like, clear as day.

      .6 is some sort of quantized or distilled .5 with a bit more RL, and the current .5 is that same cost-reduced model without the extra RL.

I had seen reports that it was clamping down on security research and things like web-scraping projects were getting caught up in that and not able to use the model very easily anymore. But I don't see any changes mentioned in the prompt that seem likely to have affected that, which is where I would think such changes would have been implemented.

  • I think it depends on how badly they want to avoid it. Stuff that is "We prefer if the model didn't do these things when the model is used here" goes into the system prompt, meanwhile stuff that is "We really need to avoid this ever being in any outputs, regardless of when/where the model is used" goes into post-training.

    So I'm guessing they want none of the model users (webui + API) to be able to do those things, rather than blocking it just in the webui. The changes mentioned in the submission are just for claude.ai AFAIK, not API users, so the "disordered eating" stuff will only be prevented for API users if they prompt against it in their own system prompts; it's not required.

  • I wonder if the child safety section "leaks" behavior into other risky topics, like malware analysis. I see overlap in how the reports mention that once the safety has been tripped it becomes even more reluctant to work, which seems to match the instructions here for child safety.

  • It's built into the model, not part of the system prompt. You'll get the same refusals via the API.

Before Opus 4.7, 4.6 had become pretty much unusable, as it kept flagging normal data analysis scripts it wrote itself as a cybersecurity risk. It got several sessions blocked, and I was unable to finish research with it, so I had to switch to GPT-5.4, which has its own problems but at least is not eager to interfere in legitimate work.

edit: to be fair Anthropic should be giving money back for sessions terminated this way.

Personally, as someone who has been lucky enough to completely cure "incurable" diseases with diet, self experimentation and learning from experts who disagreed with the common societal beliefs at the time - I'm concerned that an AI model and an AI company is planting beliefs and limiting what people can and can't learn through their own will and agency.

My concern is these models revert all medical, scientific and personal inquiry to the norms and averages of what's socially acceptable. That's very anti-scientific in my opinion and feels dystopian.

  • While I share your concern about a winner-takes-all model getting bent, I am optimistic that models we've never heard of will keep plugging away at challenging conclusions in the medical canon. We will have popular vaccine-denying AND vaccine-authoring models.

[dead]

  • Is this really a common problem? This stuff is way above me, but my toy agent seems to have bypassed this as a problem.

    I did this in mine by only ever having a few relevant tool functions in the prompt: Search for a Tool Function, Execute a Tool Function, Request Authoring of a Tool Function, Request an Update to a Tool Function, Check Status of an Authoring Request.

    It doesn't have to "remember" much. Any other functions are ones it already searched for and found in the tool service.

    When it needs a tool it reliably searches (just natural language) against the vector db catalog of functions for a good match. If it doesn't have one, it requests one. The authoring pipeline does its thing, and eventually it has a new function to use.
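
    To make that concrete, here's a minimal sketch of the catalog-search part (my own illustration using chromadb as the vector DB; the tool names and meta-functions are hypothetical, not the commenter's code):

    ```python
    # Tool descriptions live in a vector DB; the agent only ever sees a handful
    # of meta-functions (search / execute / request authoring).
    import chromadb

    client = chromadb.Client()
    catalog = client.create_collection(name="tool_catalog")

    # In a real system these entries come from the authoring pipeline.
    catalog.add(
        ids=["resize_image", "send_slack_message"],
        documents=[
            "Resize an image file to a given width and height",
            "Post a message to a Slack channel",
        ],
    )

    def search_tool(request: str) -> str | None:
        """Return the best-matching tool id, or None to trigger an authoring request."""
        hits = catalog.query(query_texts=[request], n_results=1)
        return hits["ids"][0][0] if hits["ids"][0] else None

    print(search_tool("make this picture smaller"))  # likely 'resize_image'
    ```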