Comment by FrasiertheLion
6 months ago
We’re already using vLLM as our inference server for our standard models. For custom deployments, we can run whatever inference server is needed. A rough sketch of what talking to such a deployment looks like is below.
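For context, here is a minimal sketch of how a client might talk to a vLLM deployment like the one described. vLLM exposes an OpenAI-compatible HTTP API, so the standard OpenAI Python client can be pointed at it. The model name, host, and port below are placeholders for illustration, not details from the comment.

    # Minimal sketch: querying a vLLM OpenAI-compatible server.
    # Assumes a server is already running, e.g. started with `vllm serve <model> --port 8000`;
    # the model name, host, and port here are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
        api_key="EMPTY",                      # vLLM accepts any key unless --api-key is set
    )

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)

Because the API surface is OpenAI-compatible, swapping in a different inference server for a custom deployment mostly means changing the base_url and model name rather than the client code.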