Comment by FrasiertheLion
8 months ago
We’re already using vLLM as our inference server for our standard models. We can run whatever inference server is needed for custom deployments.
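
For context, a minimal sketch of what that looks like in practice, assuming vLLM's OpenAI-compatible server is running on its default port 8000; the model name and prompt below are placeholders, not details from the comment:

    # Assumes a vLLM OpenAI-compatible server was started separately, e.g.:
    #   vllm serve meta-llama/Llama-3.1-8B-Instruct
    # The model name here is a placeholder for whatever model is actually deployed.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # vLLM's default OpenAI-compatible endpoint
        api_key="EMPTY",                      # vLLM accepts any key unless one is configured
    )

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)

Because vLLM exposes the same OpenAI-style API surface, swapping in a different inference server for a custom deployment mostly means pointing base_url at that server instead.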