Comment by codemog

9 days ago

As soon as MCP came out I thought it was over-engineered crud and didn’t invest any time in it. I have yet to regret this decision. Same thing with LangChain.

This is one key difference between experienced and inexperienced devs; if something looks like crud, it probably is crud. Don’t follow or do something because it’s popular at the time.

All the code I work on now has an MCP interface so that the LLM can debug more easily. I'd argue it is as important as the UI these days. The amount of time it has saved me is unreal. It might be worth investing a very small amount of your time in it to see if it is a good fit. Even a poor protocol can provide useful functionality.

  • Our workflows must be massively different.

    I code in 8 languages, regularly, for several open source and industry projects.

    I use AI a lot nowadays, but have never ever interacted with an MCP server.

    I have no idea what I'm missing. I am very interested in learning more about what do you use it for.

    • I've managed to ignore MCP servers for a long time as well, but recently I found myself creating one to help the LLM agents with my local language (Papiamentu) in the dialect I want.

      I made a Prolog program that knows the valid words and spelling along with sentence composition rules.

      Via the MCP server a translated text can be verified. If it's not faultless, the agent enters a feedback loop until it is.

      The nice thing is that it's implemented once and I can use it in opencode and Claude without having to explain how to run the Prolog program, etc.
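
      The verify-and-retry loop described above can be sketched roughly like this (a runnable sketch, not the actual implementation: `verify_text` stubs the Prolog checker that would sit behind the MCP tool, and the tiny word list is invented):

```python
# Sketch of the feedback loop: verify, fix, repeat until faultless.
# `verify_text` stands in for the MCP tool call into the Prolog checker.

def verify_text(text: str) -> list[str]:
    """Return a list of flagged words (empty means the text is faultless)."""
    valid = {"bon", "dia", "danki"}  # stub allow-list for illustration
    return [w for w in text.split() if w not in valid]

def translate_with_feedback(draft: str, fix) -> str:
    """Loop until the checker reports no errors, letting `fix` revise the draft."""
    while (errors := verify_text(draft)):
        draft = fix(draft, errors)  # the agent would rewrite the draft here
    return draft

# Trivial "agent" that just drops flagged words.
result = translate_with_feedback(
    "bon dia XXX",
    lambda t, errs: " ".join(w for w in t.split() if w not in errs),
)
# result == "bon dia"
```

      In the real setup, `fix` is the LLM agent revising its translation based on the errors the MCP server reports back.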

    • > I have no idea what I'm missing.


      The questions I'd ask:

          - Do you work in a team context of 10+ engineers?
          - Do you all use different agent harnesses?
          - Do you need to support the same behavior in ephemeral runtimes (GH Agents in Actions)?
          - Do you need to share common "canonical" docs across multiple repos?
          - Is it your objective to ensure a higher baseline of quality and output across the eng org?
          - Would your workload benefit from telemetry and visibility into tool activation?
      

      If none of those apply, then it's not for you. Server hosted MCP over streamable HTTP benefits orgs and teams and has virtually no benefit for individuals.


    • I can't go into specifics about exactly what I'm doing but I can speak generically:

      I have been working on a system using a Fjall datastore in Rust. I haven't found any tools that directly integrate with Fjall, so even getting insight into what data is there, or being able to remove it, is hard. So I used https://github.com/modelcontextprotocol/rust-sdk to create a thin CRUD MCP. The AI can use this to create fixtures, check if things are working how they should, or debug things: e.g. if a query is returning incorrect results and I tell the AI, it can quickly check whether it is a datastore issue or a query-layer issue.

      Another example: I have a simulator that lets me create test entities and exercise my system. The AI with an MCP server is very good at exercising the platform this way. It also lets me interact with it in plain English even when the API surface isn't directly designed for human use: "Create a scenario that lets us exercise the bug we think we have just fixed and prove it is fixed, create other scenarios you think might trigger other bugs or prove our fix is only partial"

      One more example: I have an Overmind-style task runner that reads a file, starts up every service in a microservice architecture, can restart them, can see their log output, can check if they can communicate with the other services, etc. Not dissimilar to how the AI can use Docker, but without Docker, to get max performance during both compilation and usage.

      Last example: using off-the-shelf MCPs for VCS servers like GitHub or GitLab. The AI can look at issues, update descriptions, comment, and do code review. This is very useful for your own projects but even more useful for other people's: "Use the MCP tool to see if anyone else is encountering similar bugs to what we just encountered"

    • It's very similar to the switch from a text editor plus command line to an IDE with a debugger.

      The AI gets to do two things:

      - expose hidden state
      - interact with the app, and see before/after states and errors

      It gives the LLM more time to verify its own work without you needing to step in. It's also a bit more integration-test-y than unit.

      If you were to add one MCP, make it Playwright or some similar browser-automation MCP. Very little else has as much value-add as just being able to control a browser.


    • Many products provide MCP servers to connect LLMs. For example, I can have Claude examine things through my Ahrefs account without me using the UI, etc.


  • I've just been discovering this pattern too. It's made a huge difference. Trying to get Claude to remote control an app for testing via the various other means was miserable and unreliable.

    I got it to build an MCP server into the app that supported sending commands to allow Claude to interact with it as if it was a user, including keypresses and grabbing screenshots, and the difference was immediate and really beneficial.

    Visual issues were previously one of the things it would tend to struggle with.

    • How does it compare to my go-to: a test suite that uses Playwright?

      > Claude, implement plan.md until all unit and browser tests pass


  • You are right.

    Although I have been a skeptic of MCPs, it has been an immense help with agents. I do not have an alternative at the moment.

LangChain is not over-engineered; it's not engineered at all. Pure Chaos.

  • Much like how "literally" doesn't literally mean "literally" anymore, "over-engineered" in most cases doesn't mean "too much engineering happened" but "wrong design/abstractions", which of course translates to "designs/abstractions I don't like".

  • I wish job openings for anything LLM-related would stop asking for experience with LangChain

So let's say you have a RAG LLM chat API connected to an enterprise's document corpus.

Do you not expose an MCP endpoint? Literally every VS Code or opencode instance gets it for free (a small JSON snippet in their mcp.json config) if you do auth right.
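
For reference, that JSON snippet is roughly of this shape (a sketch: the server name, URL, and auth header are placeholders, and the exact schema varies a bit between clients):

```json
{
  "mcpServers": {
    "corpus-rag": {
      "type": "http",
      "url": "https://mcp.example.com/mcp",
      "headers": { "Authorization": "Bearer ${MCP_TOKEN}" }
    }
  }
}
```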

  • Not only editors, but also different runtime contexts like GitHub Agents running in Actions.

    We can plug MCP in almost anywhere with just a small snippet of JSON, and because we're serving it from a server, we get very clear telemetry regardless of tooling and environment.

    • What are you using for hosting and deploying the MCP servers? I’d like something low friction for enterprise teams to be able to push their MCP definitions as easily as pushing a Git repo (or ideally, as part of a Git repo, kinda like GitHub pages). It’s obviously not sustainable for every team to host their own MCP servers in their own way.

      So what’s the best centralized gateway available today, with telemetry and auth and all the goodness espoused in this blog post?


What part of MCP do you think is over-engineered?

This is quite literally the opposite opinion I and many others had when first exploring MCP. It's so _obviously_ simple, which is why it gained traction in the first place.
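
One way to see that simplicity: tool discovery is a single JSON-RPC call. The client sends `{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}` and gets back a list of tools described with JSON Schema, something like this (the tool shown is invented for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "lookup_order",
        "description": "Fetch an order by id (hypothetical example tool)",
        "inputSchema": {
          "type": "object",
          "properties": { "order_id": { "type": "string" } },
          "required": ["order_id"]
        }
      }
    ]
  }
}
```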

Sniff tests are useful, but they're not wisdom. Most of these stacks are churn wrapped in a repo, so bailing early is usually the right call. Yet every so often some ugly little tool sticks, because it cuts through one miserable integration problem better than the cleaner options, and people keep it around long after the pitch deck evaporates.

The failure mode is turning taste into a religion. If you never touch anything that looks crude on day one, you also miss the occasional weird thing that later becomes boring infra.

> if something looks like crud, it probably is crud

Yes, technically, but you probably meant cruft here.