Comment by valleyer

5 days ago

I tried this out, and it's not passing the record (n.) vs. record (v.) test mentioned elsewhere in this thread. (I can ask it to repeat one, and it often repeats the other.) Am I not enabling the speech-to-speech-ness somehow?

From my understanding of the problem above, this would have something to do with the model weights. Have you tested this with the transformers inference baseline shown on Hugging Face?

In our deployment, we do not tune the model in any way; this all uses the base instruct model provided on Hugging Face:

https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct

As for the potential concern around conversation turns: our platform is designed for one-off record -> response flows, but via the API you can build your own conversation agent on top of the model.