Comment by ianm218

13 hours ago

> Yeah this is just straight up nonsense.

I was just sharing my experience; I'm not sure what you mean. It's only an n=1 data point.

From first principles I 100% agree, and yes, I was using a CLI tool I built with Typer that has very clear --help output plus documentation meant to guide multi-step workflows. I still got much better performance when I tried MCP. I asked Claude Code to explain the difference:

> why does our MCP onboarding get better performance than using objapi in order to make these pipelines? Like I can see the performance is better but it doesn't intuitively make sense to me why an MCP does better than an API for the "create a pipeline" workflow

It's not MCP-the-protocol vs API-the-protocol. They hit the same backend. The difference is who the interface was designed for.

  The CLI is a human interface that Claude happens to use. Every objapi pb call means:
  - Spawning a new Python process (imports, config load, HTTP setup)
  - Constructing a shell command string (escaping SQL in shell args is brutal)
  - Parsing Rich-formatted table output back into structured data
  - Running 5-10 separate commands to piece together the current state (conn list, sync list, schema classes, etc.)
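To make the escaping point concrete, here's a small sketch (the `objapi` subcommand string and the SQL are illustrative, not the real CLI's syntax): the CLI path forces every SQL string through a shell-quoting layer, while the structured path just serializes it.

```python
import json
import shlex

# A realistic SQL snippet: single quotes, double quotes, and newlines.
sql = """SELECT id, name
FROM users
WHERE status = 'active' AND note LIKE "%it's%" """

# CLI path: the SQL has to survive a trip through the shell.
# shlex.quote makes it safe, but the model must get this right every
# time, and the command's human-formatted output comes back as text.
cmd = f"objapi pb node add sql --sql {shlex.quote(sql)}"  # hypothetical command
print(cmd)

# Structured path: the same payload as JSON -- no shell-escaping layer,
# and the response would be structured too.
payload = json.dumps({"tool": "add_sql_node", "params": {"sql": sql}})

# Round-trip check: JSON preserves the string exactly.
assert json.loads(payload)["params"]["sql"] == sql
```

`shlex.quote` handles the happy path, but the LLM has to remember to use it on every call; JSON removes that failure mode entirely.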

  The MCP server is an LLM interface by design. The wins are specific:

  1. onboard://workspace-state resource — one call gives Claude the full picture: connections, syncs, object classes, relations, what exists, what's missing. With the CLI, Claude runs a half-dozen commands and mentally joins the output.
  2. Bundled operations — explore_connection returns tables AND their columns, PKs, FKs in one response. The CLI equivalent is conn tables → pick table → conn preview for each. Fewer round-trips = fewer places for the LLM to lose the thread.
  3. Structured in, structured out — MCP tools take JSON params, return JSON. No shell escaping, no parsing human-formatted tables. When Claude needs to pass a SQL string with quotes and newlines through objapi pb node add sql --sql "...", things break in creative ways.
  4. Tool descriptions as documentation — the MCP tool descriptions are written to teach an LLM the workflow. The CLI --help is written for humans who already know the concepts.
  5. Persistent connection — the MCP server keeps one ObjectsClient alive across all calls. The CLI boots a new Python process per command.
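A rough sketch of what a "bundled operation" like point 2 means in practice (the tool name `explore_connection`, the catalog data, and the response shape are all made up for illustration, not the actual server's schema): one call, one JSON document with tables, columns, PKs, and FKs together.

```python
import json

# Illustrative stand-in data; a real server would query the backend.
FAKE_CATALOG = {
    "users": {
        "columns": ["id", "email", "created_at"],
        "primary_key": ["id"],
        "foreign_keys": [],
    },
    "orders": {
        "columns": ["id", "user_id", "total"],
        "primary_key": ["id"],
        "foreign_keys": [{"column": "user_id", "references": "users.id"}],
    },
}

def explore_connection(connection_id: str) -> str:
    """One call returns tables AND their columns/PKs/FKs. The CLI
    equivalent is several commands whose text output the model has
    to re-parse and mentally join."""
    return json.dumps({"connection": connection_id, "tables": FAKE_CATALOG})

result = json.loads(explore_connection("demo"))
print(sorted(result["tables"]))  # ['orders', 'users']
```

The LLM gets the whole sub-graph of state in one response instead of reconstructing it from a half-dozen table dumps.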

  So the answer is: same API underneath, but the MCP server eliminates the shell-string-parsing impedance mismatch and gives Claude the right abstractions (fewer, chunkier operations with full context) instead of making it pretend to be a human at a terminal.
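Point 5 can be sketched too (this `ObjectsClient` is a hypothetical stand-in, with a counter in place of real imports/config/HTTP setup): per-command processes pay the setup cost on every call, a long-lived server pays it once.

```python
class ObjectsClient:
    """Hypothetical stand-in for the real client; imagine __init__
    doing imports, config load, and HTTP session setup."""
    setup_count = 0

    def __init__(self):
        ObjectsClient.setup_count += 1  # "expensive" setup happens here

    def call(self, op: str) -> str:
        return f"ok:{op}"

# CLI model: every command boots a fresh process => fresh client.
for op in ("conn list", "sync list", "schema classes"):
    ObjectsClient().call(op)
print(ObjectsClient.setup_count)  # 3 setups for 3 commands

# MCP model: one server process keeps one client alive across calls.
ObjectsClient.setup_count = 0
client = ObjectsClient()
for op in ("conn list", "sync list", "schema classes"):
    client.call(op)
print(ObjectsClient.setup_count)  # 1 setup for 3 commands
```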

For context: I was working on a visual data pipeline builder and had given the model the same API the frontend uses, and it was doing very poorly with that API.