Comment by prodigycorp

10 hours ago

My complaint is that there's not enough information in the blog post. The title of the post is "How Claude Code works in large codebases", yet only 1,521 of its 18,135 characters are dedicated to expanding on that premise.

My criticism is fair. This is not an engineering blog post, it's purely marketing.

you shouldn't expect a corpo blog to read like an engineering one

try this instead: https://anthropic.com/engineering

  • Perhaps, but I'm commenting on a blog post called "How Claude Code works in large codebases". That's an interesting question to me. I had hoped there was a more interesting answer.

    • sure.

      replying to the question introduced with your edit at the root: mcp servers tend to inject too many tokens into the context that aren't relevant to the task at hand. ex: sentry's mcp is useful while you're collecting context for a bug in production, but hardly useful later when fixing the bug; at that point you'd probably want treesitter. and if you're working on a new feature, you might instead want to pull details from github issues or jira tickets.

      the consensus seems to land on making the right tools available to the agent and letting it pick which ones to use for the task. this typically means cli tools like git, gh, linters, a cli for your cloud/hosting provider, etc.
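
      a minimal sketch of what "plain cli tools" means in practice (the throwaway repo setup is only there so the snippet runs standalone; names and commit message are made up):

      ```shell
      # throwaway repo so the commands below have something to act on
      tmp=$(mktemp -d) && cd "$tmp"
      git init -q .
      git -c user.name=agent -c user.email=agent@example.com \
          commit -q --allow-empty -m "init"

      # instead of an mcp tool schema loaded up front, the agent just
      # composes cli calls as the task requires:
      git log --oneline -1    # inspect recent history
      git status --short      # check the working tree
      ```

      each call costs only the tokens of the command and its output, and nothing enters the context until the agent actually runs it.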

      that is where skills come in: they bootstrap some context about a task the operator wants to complete. skills use progressive disclosure — each Read() adds more context, but the agent controls what to load up and what to ignore. skills can also ship scripts that facilitate actions relevant to the task.
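
      a sketch of what a skill directory could look like (every name here is hypothetical; this just illustrates the layering):

      ```shell
      # hypothetical skill: a short SKILL.md the agent always reads,
      # plus deeper references and scripts it loads only on demand
      tmp=$(mktemp -d) && cd "$tmp"
      mkdir -p my-skill/references my-skill/scripts
      printf '%s\n' \
          'name: my-skill' \
          'description: one-line summary the agent sees up front' \
          '' \
          'read references/details.md only if the task needs the full api notes.' \
          'run scripts/check.sh to validate the result.' \
          > my-skill/SKILL.md
      echo "the heavyweight details, pulled in via Read() on demand" \
          > my-skill/references/details.md
      printf '#!/bin/sh\necho ok\n' > my-skill/scripts/check.sh
      chmod +x my-skill/scripts/check.sh
      ```

      the progressive-disclosure part is just that top file: cheap to keep in context, with the expensive material one Read() away.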