Comment by AndreSlavescu
5 days ago
We actually deployed working speech-to-speech inference that builds on top of vLLM as the backbone. The main thing was supporting the "Talker" module, which is currently not supported on the qwen3-omni branch of vLLM.
Check it out here: https://models.hathora.dev/model/qwen3-omni
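For anyone wondering what the Talker actually is: Qwen3-Omni splits generation between a "Thinker" that produces text and a "Talker" that converts the Thinker's hidden states into speech codec tokens, which a codec decoder then renders as a waveform; the vLLM branch only runs the Thinker. Here's a rough sketch of the reference Transformers path (not our vLLM setup) so you can see where the Talker sits. Class names, the qwen_omni_utils helper, and the speaker/return_audio arguments are from my memory of the Qwen model card, so treat the exact signatures as assumptions:

```python
# Sketch of Qwen3-Omni speech-to-speech via Hugging Face Transformers.
# Arg/class names follow the Qwen model card as best I recall; verify before use.
import soundfile as sf
from transformers import Qwen3OmniMoeForConditionalGeneration, Qwen3OmniMoeProcessor
from qwen_omni_utils import process_mm_info  # helper package published by the Qwen team

MODEL = "Qwen/Qwen3-Omni-30B-A3B-Instruct"
model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
    MODEL, dtype="auto", device_map="auto"
)
processor = Qwen3OmniMoeProcessor.from_pretrained(MODEL)

# Audio in: a spoken question as the user turn.
conversation = [
    {"role": "user", "content": [{"type": "audio", "audio": "question.wav"}]},
]
text = processor.apply_chat_template(
    conversation, add_generation_prompt=True, tokenize=False
)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=False)
inputs = processor(
    text=text, audio=audios, images=images, videos=videos,
    return_tensors="pt", padding=True,
).to(model.device)

# return_audio=True is what engages the Talker + codec decoder, producing a
# waveform alongside the text tokens; without it you only get the Thinker's text.
text_ids, audio = model.generate(**inputs, speaker="Ethan", return_audio=True)
print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])
sf.write("answer.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000)
```

The key point for serving: everything up to `text_ids` is what vLLM's qwen3-omni branch can already do; the Talker/codec step after it is what we had to add ourselves.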
Is your work open source?
Unfortunately, not at the moment. On the open-source front, though, as far as I know the vLLM team has now published a separate repository for omni models:
https://github.com/vllm-project/vllm-omni
I have not yet tested whether it does full speech-to-speech, but it seems like a promising home for omni-modal models.
Nice work. Are you working on streaming input/output?
Yeah, that's something we currently support. Feel free to try the platform out! There's no cost to you for now; you just need a valid email to sign up.
I tried this out, and it's not passing the record (n.) vs. record (v.) test mentioned elsewhere in this thread. (I can ask it to repeat one, and it often repeats the other.) Am I not enabling the speech-to-speech-ness somehow?