Comment by hhh
4 days ago
Most modern inference stacks can dispatch MCP/tool calls during generation, which is how Code Interpreter etc. work in ChatGPT. Basically it's an MCP server: when the model emits a tool call, execution happens as a call to their sandbox, and the result is returned to the LLM so it can continue generation.
You can do this with gpt-oss using vLLM.
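Roughly, that loop looks like the sketch below. Everything here is illustrative (stubbed model, toy "sandbox", made-up message shapes), not a real MCP or vLLM API — it just shows the dispatch cycle: model emits a tool call, the server executes it, and the result goes back into context so generation continues.

```python
# Illustrative tool-call loop: model -> tool call -> sandboxed execution ->
# result appended to context -> model continues. All names are hypothetical.
import json

def sandbox_execute(code: str) -> str:
    """Stand-in for the sandboxed interpreter; just evaluates an expression."""
    return str(eval(code))

def fake_model(messages):
    """Stub model: first turn requests a tool call, next turn uses the result."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"role": "assistant",
                "tool_call": {"name": "python",
                              "arguments": json.dumps({"code": "2 + 2"})}}
    return {"role": "assistant",
            "content": f"The answer is {tool_results[-1]['content']}."}

def run(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        msg = fake_model(messages)
        messages.append(msg)
        call = msg.get("tool_call")
        if call is None:          # no tool call: generation is finished
            return msg["content"]
        args = json.loads(call["arguments"])
        # Execution happens server-side; result is fed back as a tool message.
        messages.append({"role": "tool",
                         "content": sandbox_execute(args["code"])})

print(run("What is 2 + 2?"))
```

With gpt-oss on vLLM, the same cycle is handled by the serving stack's tool-calling support rather than hand-rolled like this.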