Comment by santiago-pl
6 hours ago
Thanks for raising it! Since vLLM has an OpenAI-compatible API, this should work for now:
docker run --rm -p 8080:8080 \
-e OPENAI_API_KEY="some-vllm-key-if-needed" \
-e OPENAI_BASE_URL="http://host.docker.internal:11434/v1" \
...
enterpilot/gomodel
I'll add a more convenient way to configure it in the coming days.
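Since the container only needs an OpenAI-compatible endpoint, any server that accepts the standard chat-completions request body will work. A minimal sketch of that body, built with the standard library (the model name here is an assumption; substitute whatever model your vLLM server is actually serving):

```python
import json

def chat_request(model, prompt):
    """Build the JSON body for a POST to <OPENAI_BASE_URL>/chat/completions."""
    return json.dumps({
        "model": model,  # assumed model id; match your vLLM server's --model
        "messages": [{"role": "user", "content": prompt}],
    })

body = chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello")
print(body)
```

Any server that understands this shape, vLLM included, can sit behind the `OPENAI_BASE_URL` above.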