
Comment by mythz

7 hours ago

MCP support is available via the fast_mcp extension: https://llmspy.org/docs/mcp/fast_mcp

I use llms.py as a personal assistant, where MCP support is required to access tools that are only exposed via MCP servers.

MCP is a great way to make features available to AI assistants; here are a couple I've created after enabling MCP support:

- https://llmspy.org/docs/mcp/gemini_gen_mcp - Gives AI agents the ability to generate Nano Banana images or TTS audio

- https://llmspy.org/docs/mcp/omarchy_mcp - Manage Omarchy Desktop Themes with natural language

I will say there's a noticeable delay in using MCP vs native tools, which is why I ended up porting Anthropic's Node filesystem MCP to Python [1] to speed up common AI assistant tasks. MCP servers aren't ideal for frequent access to small tasks, but they're great for long-running ones like image/audio generation.

[1] https://github.com/ServiceStack/llms/blob/main/llms/extensio...
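The delay mentioned above is largely per-call process and serialization cost. A rough, self-contained sketch of the difference (the subprocess here is only a stand-in for a stdio-style out-of-process round trip, not llms.py's or MCP's actual implementation):

```python
# Hypothetical micro-benchmark: in-process tool call vs. a per-call
# subprocess round trip exchanging JSON (a crude stand-in for the
# spawn + serialize overhead of an out-of-process tool server).
import json
import subprocess
import sys
import time


def read_file_tool(path: str) -> str:
    """In-process 'filesystem' tool: just read and return the file."""
    with open(path) as f:
        return f.read()


def subprocess_tool(path: str) -> str:
    """Same operation via a child Python process speaking JSON over
    stdin/stdout, paying interpreter startup + serialization each call."""
    out = subprocess.run(
        [sys.executable, "-c",
         "import json,sys; req=json.load(sys.stdin); "
         "print(json.dumps({'text': open(req['path']).read()}))"],
        input=json.dumps({"path": path}),
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["text"]


if __name__ == "__main__":
    path = sys.argv[0] if sys.argv[0] else "/etc/hostname"
    t0 = time.perf_counter(); read_file_tool(path)
    t1 = time.perf_counter(); subprocess_tool(path)
    t2 = time.perf_counter()
    print(f"in-process: {t1 - t0:.6f}s, subprocess: {t2 - t1:.6f}s")
```

The in-process call is typically orders of magnitude faster, which matches the observation that the overhead matters for frequent small operations but is negligible next to a multi-second image or audio generation.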

Does the MCP implementation make it easy to swap out the underlying image provider? I've found Gemini is still a bit hit or miss for actual print-on-demand products compared to Midjourney. Since MJ still doesn't have a real API, I've been routing requests to Flux via Replicate for higher-quality automated flows. Curious if I could plug that in here without too much friction.

  • MCP gives AI models that don't support image generation natively the ability to generate images/audio via tool calling.

    But you can just select the image generation model you prefer to use directly [1]. Currently supports Google, OpenAI, OpenRouter, Chutes, Z.ai and Nvidia.

    I tried Replicate's MCP, but it appears to support everything except image generation, which I didn't understand; surely image generation would be its most sought-after feature?

    [1] https://llmspy.org/docs/v3#image-generation-support