Comment by simonw
10 months ago
The one feature missing from LLM core for this right now is serving models over a local, OpenAI-compatible HTTP server. There's a plugin you can try for that, though: https://github.com/irthomasthomas/llm-model-gateway
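For context, "OpenAI-compatible" means the server accepts requests in the OpenAI chat completions format, so any existing OpenAI client can target it just by swapping the base URL. A minimal sketch of the request shape such a server expects (the host, port, and model name here are placeholder assumptions, not defaults documented by the plugin):

```python
import json

# Endpoint path follows the OpenAI chat completions API; the host/port
# and model name are placeholders, not the plugin's actual defaults.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
# Any OpenAI client pointed at this base URL would send an equivalent
# JSON body, which is what makes the server a drop-in local substitute.
```

Because the wire format matches, tools built against the OpenAI API can talk to a local model without code changes beyond the base URL.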