Comment by woeirua

7 days ago

Ok, but there are still many environments where an LLM will not have access to a CLI. In those situations, skills calling CLI tools to hook into APIs are DOA.

What's the advantage of an environment without CLI access? You end up running and maintaining your own server (or paying someone else to maintain one) just so the AI has access to tools. Can't you just run the AI on that server instead?

  • The advantage is that I can have it in my pocket.

    • Gateway agents have been a thing for many months now (and I don't mean openclaw, which has grown into a security disaster). There are good, minimal gateway agents today that can fit in your pocket.

  • Obvious example is a corporate chatbot (if it's using tools, probably for internal use). Non-technical users might be accessing it from a phone or locked-down corporate device, and you probably don't want to run a CLI in a sandbox somewhere for every session, so you'd like the LLM to interface with some kind of API instead.

    Although, I think MCP is not really appropriate for this either. (And frankly I don't think chatbots make for good UX, but management sure likes them.)

    • Why are they not calling APIs directly with strictly defined inputs and outputs like every other internal application?

      The story for MCP just makes no sense, especially in an enterprise.
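The "strictly defined inputs and outputs" point above can be sketched concretely: instead of MCP, the chatbot backend declares each internal API as a tool with a fixed parameter schema, and the harness validates the model's arguments before dispatching. This is a minimal illustration, not any particular product's API; all names here (`TOOLS`, `lookup_order`, `call_tool`) are hypothetical.

```python
# Sketch: internal APIs exposed to an LLM as tools with strictly
# declared inputs and outputs. The harness validates arguments
# against the declared schema before calling anything.

TOOLS = {
    "lookup_order": {
        "description": "Fetch an order by ID from the internal order service.",
        "parameters": {
            "order_id": {"type": str, "required": True},
            "include_items": {"type": bool, "required": False},
        },
    },
}

def lookup_order(order_id: str, include_items: bool = False) -> dict:
    # Stand-in for a real internal API call.
    return {
        "order_id": order_id,
        "status": "shipped",
        "items": ["widget"] if include_items else None,
    }

def call_tool(name: str, arguments: dict) -> dict:
    """Validate the model's arguments against the schema, then dispatch."""
    spec = TOOLS[name]["parameters"]
    for param, rules in spec.items():
        if rules["required"] and param not in arguments:
            raise ValueError(f"missing required parameter: {param}")
        if param in arguments and not isinstance(arguments[param], rules["type"]):
            raise TypeError(f"{param} must be {rules['type'].__name__}")
    for param in arguments:
        if param not in spec:
            raise ValueError(f"unknown parameter: {param}")
    return {"lookup_order": lookup_order}[name](**arguments)

result = call_tool("lookup_order", {"order_id": "A-123"})
```

Badly typed or extraneous arguments fail validation before they ever reach the internal service, which is the whole appeal of a strict schema over a free-form protocol.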

    • > and you probably don't want to run a CLI in a sandbox somewhere for every session

      You absolutely DO want to run everything related to LLMs in a sandbox; that's basic hygiene.

idk, just have a standard internet-request tool that skills can describe endpoints to. you could even mock `curl` for the same CLI feel
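That "mock `curl`" idea can be sketched with the standard library: the model emits a curl-style command line, and the harness parses it into a request object it performs itself (applying whatever allow-listing it wants) instead of ever running a real CLI. This is an assumption-laden sketch; `parse_curl` is a hypothetical name, and only a few common curl flags are handled.

```python
# Sketch: one generic HTTP tool whose interface mimics the curl CLI,
# so skills can describe endpoints as familiar curl command lines
# even in environments with no shell access.
import argparse
import shlex
import urllib.request

def parse_curl(cmdline: str) -> urllib.request.Request:
    """Turn a curl-style command string into a urllib Request (not sent)."""
    parser = argparse.ArgumentParser(prog="curl", add_help=False)
    parser.add_argument("url")
    parser.add_argument("-X", "--request", default=None)
    parser.add_argument("-H", "--header", action="append", default=[])
    parser.add_argument("-d", "--data", default=None)
    args = parser.parse_args(shlex.split(cmdline)[1:])  # drop the "curl" token

    # curl defaults to POST when a body is given, otherwise GET.
    method = args.request or ("POST" if args.data is not None else "GET")
    headers = dict(h.split(": ", 1) for h in args.header)
    body = args.data.encode() if args.data is not None else None
    return urllib.request.Request(args.url, data=body, headers=headers, method=method)

# The LLM emits a curl line; the harness parses and vets it, then
# decides whether (and how) to actually send the request.
req = parse_curl(
    """curl -X POST -H 'Content-Type: application/json' """
    """-d '{"q": "status"}' https://internal.example/api/search"""
)
```

Because the harness only ever sees a parsed `Request`, it can enforce an endpoint allow-list or strip headers before sending, while the model keeps the CLI-shaped ergonomics the thread is arguing for.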