Comment by FrasiertheLion
4 months ago
We’re already using vLLM as our inference server for our standard models. We can run whatever inference server is needed for custom deployments.
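For context, here is a minimal sketch of how a client could talk to a vLLM deployment through its OpenAI-compatible endpoint; the base URL, model name, and API key below are placeholder assumptions for illustration, not details from the comment.

```python
# Sketch: querying a vLLM server via its OpenAI-compatible API.
# Assumes a vLLM server is already running locally on port 8000
# (e.g. started with `vllm serve <model>`); the model name and
# endpoint are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM endpoint
    api_key="EMPTY",                      # vLLM ignores the key by default
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello, what can you do?"}],
)

print(response.choices[0].message.content)
```

Because the server speaks the OpenAI wire format, the same client code works whether the backend is vLLM or a different inference server swapped in for a custom deployment.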