Comment by AndreSlavescu
21 hours ago
From my understanding of the problem above, this looks like something to do with the model weights themselves. Have you tested it against the transformers inference baseline shown on Hugging Face?
In our deployment, we do not tune the model in any way; this is all just the base instruct model provided on Hugging Face:
https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct
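For anyone who wants to reproduce that baseline, a minimal text-only sketch of the transformers flow might look like the following. The class names (`Qwen3OmniMoeForConditionalGeneration`, `Qwen3OmniMoeProcessor`) are the ones listed on the model card and are an assumption here; verify them against the model card and your transformers version.

```python
# Minimal text-only smoke test against the stock Hugging Face weights.
# Assumption: the Qwen3-Omni classes named on the model card
# (Qwen3OmniMoeForConditionalGeneration / Qwen3OmniMoeProcessor) and a recent
# transformers release that includes them.
import torch
from transformers import Qwen3OmniMoeForConditionalGeneration, Qwen3OmniMoeProcessor

MODEL_ID = "Qwen/Qwen3-Omni-30B-A3B-Instruct"

processor = Qwen3OmniMoeProcessor.from_pretrained(MODEL_ID)
model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 30B MoE within reach of multi-GPU setups
    device_map="auto",
)

# Single-turn prompt, mirroring the one-off record -> response flow described below.
conversation = [
    {"role": "user", "content": [{"type": "text", "text": "Summarize what you can do in one sentence."}]},
]

text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
inputs = processor(text=text, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
# Omni checkpoints can also emit audio from the talker head; keep only the text ids here.
text_ids = out[0] if isinstance(out, tuple) else out
print(processor.batch_decode(text_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```

If the same odd output shows up with this plain transformers path, the issue is likely in the weights or prompt formatting rather than in our serving layer.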
As for the potential concern around conversation turns: our platform is designed for one-off record -> response flows, but via the API you can build your own conversation agent on top of the model.