Comment by simiones
8 months ago
I read the comments praising these voices as very life like, and went to the page primed to hear very convincing voices. That is not at all what I heard though.
The voices are decent, but the intonation is off on almost every phrase, and there is a very clear robotic-sounding modulation. It's generally very impressive compared to many text-to-speech solutions from a few years ago, but for today, I find it very uninspiring. The AI generated voice you hear all over YouTube shorts is at least as good as most of the samples on this page.
The only part that seemed impressive to me was the English + (Mandarin?) Chinese sample; that one seemed to switch very seamlessly between the two. But this may well be simply because (1) I'm not familiar with any Chinese language, so I couldn't really judge the pronunciation of that, and (2) the different character systems make it extremely clear that the model needs to switch between different languages. Peut-être que cela n'aurait pas été si simple if it had been switching between two languages using the same writing system - I'm particularly curious how it would have read "simple" in the phrase above (I think it should be read with the French pronunciation, for example).
And, of course, the singing part is painfully bad, I am very curious why they even included it.
Their comments about the singing and background music are odd. It's been a while since I've done academic research, but something about those comments gave me a strong "we couldn't figure out how to make background music go away in time for our paper submission, so we're calling it a feature" vibe, as opposed to a "we genuinely like this and think it's a differentiator" vibe.
Totally felt the same way! Singing happens spontaneously? What?
They mention that in the FAQ here: https://github.com/microsoft/VibeVoice/tree/main?tab=readme-...
> In fact, we intentionally decided not to denoise our training data because we think it's an interesting feature for BGM to show up at just the right moment. You can think of it as a little easter egg we left for you.
It's not a bug, it's a feature! Okaaaaay
Is there any better model you can point at? I would be interested in having a listen.
There are people – and it does not matter what the topic is – who will overstate the progress made (and others who will understate it, case in point). Neither should put a damper on progress. This is the best I personally have heard so far, but I certainly might have missed something.
It’s tough to name the best local TTS since they all seem to trade off on quality and features and none of them are as good as ElevenLabs’ closed-source offering.
However Kokoro-82M is an absolute triumph in the small model space. It curbstomps models 10-20x its size in terms of quality while also being runnable on like, a Raspberry Pi. It’s the kind of thing I’m surprised even exists. Its downside is that it isn’t super expressive, but the af_heart voice is extremely clean, and Kokoro is way more reliable than other TTS models: It doesn’t have the common failure mode where you occasionally have a couple extra syllables thrown in because you picked a bad seed.
If you want something that can do convincing voice acting, either pay for ElevenLabs or keep waiting. If you’re trying to build a local AI assistant, Kokoro is perfect, just use that and check the space again in like 6 months to see if something’s beaten it. https://huggingface.co/hexgrad/Kokoro-82M
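For the "local AI assistant" case, here is a minimal sketch of driving Kokoro from Python, based on the usage shown on the hexgrad/Kokoro-82M model card; the KPipeline class, the 'a' (American English) lang_code, the af_heart voice and the 24 kHz output rate are assumptions if the package has changed since.

```python
# Minimal local TTS sketch with Kokoro (assumes the `kokoro` and `soundfile` packages).
from kokoro import KPipeline
import soundfile as sf

pipeline = KPipeline(lang_code='a')  # 'a' = American English
text = "Hello! I'm your local assistant."

# The pipeline yields (graphemes, phonemes, audio) chunks; write each chunk to a WAV file.
for i, (graphemes, phonemes, audio) in enumerate(pipeline(text, voice='af_heart')):
    sf.write(f"out_{i}.wav", audio, 24000)  # Kokoro outputs 24 kHz audio
```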
There's a certain know-nothing feeling I get that worries me when we start at the link (which has data showing it beating ElevenLabs on quality), jump to "eh, it's actually worse than anything I've heard in the last 2 years", and end up at "none are as good as ElevenLabs" - the recommendation and the commentary on it, of course, have nothing to do with my feeling, cheers
What is your opinion about F5-TTS or Fish-TTS?
I cobbled together llm-tts to run as many local (and remote) TTS models as I could find and get working.
https://github.com/mlang/llm-tts
Strictly speaking, even music generation fits the usage pattern: text in, audio out.
llm-tts is far from complete, but it makes it relatively "easy" to try a few models in a uniform way.
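To illustrate the "text in, audio out" pattern such a uniform wrapper exposes, here is a hypothetical sketch; the names below (TTSBackend, synthesize, speak) are made up for illustration and are not llm-tts's actual API.

```python
# Hypothetical uniform TTS interface: every backend takes text and returns audio.
from typing import Protocol
import numpy as np
import soundfile as sf

class TTSBackend(Protocol):
    def synthesize(self, text: str) -> tuple[np.ndarray, int]:
        """Return (mono audio samples, sample rate)."""
        ...

def speak(backend: TTSBackend, text: str, path: str) -> None:
    # Any model that fits the Protocol can be swapped in without changing callers.
    audio, sample_rate = backend.synthesize(text)
    sf.write(path, audio, sample_rate)
```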
Not open source or local, but just try ChatGPT Voice Conversation mode. To my ears, it's a generation ahead of these VibeVoice samples.
Probably not even the best ones, but among some recent models I find Dia and Orpheus more natural
- http://dia-tts.com/
- https://github.com/canopyai/Orpheus-TTS
Higgs Audio v2 is currently SOTA in OSS TTS.
Elevenlabs v3 (not local)
i think orpheus and sesame sound better
One of the things this model is actually quite good at is voice cloning. Drop a recorded sample of your voice into the voices folder, and it just works.
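A minimal sketch of that "drop a sample into the voices folder" workflow; the demo/voices/ path and the 24 kHz mono format are assumptions rather than documented VibeVoice requirements, so check the repo's README for the actual layout.

```python
# Prepare a clean reference sample and place it where the demo looks for voices.
import librosa
import soundfile as sf

audio, sr = librosa.load("my_recording.wav", sr=24000, mono=True)  # resample to 24 kHz mono
sf.write("demo/voices/my_voice.wav", audio, sr)                    # assumed voices folder path
```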
bonus usage
I agree. For some reason the female voices are waaay more convincing than the male ones too, which sound barely better than speech synthesis from a decade ago.
Results correlate with investment, and more of it goes into synthesizing female-coded voices. As for why female-coded voices get more investment, we all know; the only difference is in attitudes towards that (the correct answer, of course, is "it sucks").
We all know? Female voices have better intelligibility? That's my guess anyway.
It's good, but not the best free model. I find Chatterbox to be more realistic, with none of the robotic sound and better (though not perfect) intonation.
Chatterbox sounds great, their demo page is a good introduction: https://resemble-ai.github.io/chatterbox_demopage/
I agree. We switched from elevenlabs to chatterbox (hosted on Resemble.ai) and it is much much cheaper and better.
The English/Mandarin section was VERY impressive. The accents of both the woman speaking English and the man speaking Chinese were spot on. Both sound very convincingly like they are speaking a second language - anyone here can hear that in the Chinese woman's English, and the foreigner speaking Chinese was just as convincing.
This is close to SOTA emotional performance, at least for the female voices.
I trust the human scores in the paper. At least my ear aligns with that figure.
With stuff like this coming out in the open, I wonder if ElevenLabs will maintain its huge ARR lead in the field. I really don't see how they can continue to maintain a lead when their offering is getting trounced by open models.
Hmmmm… what is your opinion on the examples showcased here vs the ones on the Dia demo page?
https://yummy-fir-7a4.notion.site/dia
I am not sure why but I find the pacing of the parakeet based models (like Dia) to be much more realistic.
11labs is facing a real competitor
The male Chinese speakers had THICK American accents. Nothing really wrong with the language, but think of the stereotypical German speaking English. That was kind of strange to me.
I think it's because it was using the American voice for it. Conversely the female voice in the Mandarin conversation spoke English with a Chinese accent.
ElevenLabs has a much more convincing voice model
They also offer an AI Voice Changer that will take a recording and transform it into a different voice but retain the cadence and intonation.
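A hedged sketch of calling that speech-to-speech ("Voice Changer") feature with plain requests; the endpoint path, the "audio" form field and the placeholder IDs are assumptions based on ElevenLabs' public API docs and may have changed.

```python
# Convert a recording into a target voice while keeping the original cadence/intonation.
import requests

VOICE_ID = "YOUR_TARGET_VOICE_ID"  # placeholder: the voice to convert into
API_KEY = "YOUR_API_KEY"           # placeholder

url = f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}"
with open("input_recording.wav", "rb") as f:
    resp = requests.post(url, headers={"xi-api-key": API_KEY}, files={"audio": f})
resp.raise_for_status()

with open("converted.mp3", "wb") as out:
    out.write(resp.content)
```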
Open source?
it's not oss
The Chinese is good. In the Mandarin-to-English example she sounds native. The English-to-Mandarin one sounds good too, but he does have an English speaker's accent, which I think is intentional.
> (1) I'm not familiar with any Chinese language, so I couldn't really judge the pronunciation of that
https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect