Demos here: https://resemble-ai.github.io/chatterbox_demopage/ (not mine)
This is a good release if they're not too cherry picked!
I say this every time it comes up, and it's not as sexy to work on, but in my experiments voice AI is really held back by transcription, not TTS. Unless that's changed recently.
FWIW in my recent experience I've found LLMs are very good at reading through the transcription errors
(I've yet to experiment with giving the LLM alternate transcriptions or confidence levels, but I bet they could make good use of that too)
Pairing speech recognition with a LLM acting as a post-processor is a pretty good approach.
I put together a script a while back which converts any passed audio file (wav, mp3, etc.), normalizes the audio, passes it to ggerganov whisper for transcription, and then forwards to an LLM to clean the text. I've used it with a pretty high rate of success on some of my very old and poorly recorded voice dictation recordings from over a decade ago.
Public gist in case anyone finds it useful:
https://gist.github.com/scpedicini/455409fe7656d3cca8959c123...
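If you'd rather not dig through the gist, a trimmed-down sketch of the same idea looks roughly like this (the whisper.cpp CLI name/flags and the local LLM endpoint below are assumptions from memory, so adjust them to your setup):

```python
# Rough sketch: audio -> normalized 16 kHz mono WAV -> whisper.cpp -> LLM cleanup.
# Assumes ffmpeg and a whisper.cpp build are installed; binary name, flags, and
# the local OpenAI-compatible endpoint are placeholders.
import subprocess, sys, requests

def transcribe_and_clean(audio_path: str, model_path: str = "ggml-base.en.bin") -> str:
    wav = "normalized.wav"
    # Loudness-normalize and resample to the 16 kHz mono WAV whisper expects.
    subprocess.run(["ffmpeg", "-y", "-i", audio_path, "-af", "loudnorm",
                    "-ar", "16000", "-ac", "1", wav], check=True)
    # Run whisper.cpp and write a plain-text transcript ("transcript.txt").
    subprocess.run(["whisper-cli", "-m", model_path, "-f", wav,
                    "-otxt", "-of", "transcript"], check=True)
    raw = open("transcript.txt", encoding="utf-8").read()
    # Ask a local OpenAI-compatible LLM endpoint to fix punctuation/mis-hearings.
    resp = requests.post("http://localhost:11434/v1/chat/completions", json={
        "model": "llama3",
        "messages": [
            {"role": "system", "content": "Clean up this raw speech transcript: "
             "fix punctuation and obvious mis-transcriptions, change nothing else."},
            {"role": "user", "content": raw},
        ],
    })
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(transcribe_and_clean(sys.argv[1]))
```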
6 replies →
I was going to say, ideally you’d be able to funnel alternates to the LLM, because it would be vastly better equipped to judge what is a reasonable next word than a purely phonetic model.
2 replies →
do you know if any current locally hostable public transcribers are good at diarization? for some tasks having even crude diarization would improve QOL by a huge factor. i was looking at a whisper diarization python package for a bit but it was a bitch to deploy.
5 replies →
Right you are. I've used Speechmatics; they do a decent job with transcription.
1 error every 78 characters?
3 replies →
Play with the Huggingface demo and I'm guessing this page is a little cherry-picked? In particular I am not getting that kind of emotion in my responses.
It is hard to get consistent emotion with this. There are some parameters, and you can go a bit crazy, but it gets weird…
I absolutely ADORE that this has swearing directly in the demo. And from Pulp Fiction, too!
> Any of you fucking pricks move and I'll execute every motherfucking last one of you.
I'm so tired of the boring old "miss daisy" demos.
People in the indie TTS community often use the Navy Seals copypasta [1, 2]. It's refreshing to see Resemble using swear words themselves.
They know how this will be used.
[1] https://en.wikipedia.org/wiki/Copypasta
[2] https://knowyourmeme.com/memes/navy-seal-copypasta
Heh, I always type out the first sentence or two of the Navy Seal copypasta when trying out keyboards.
Can’t you get around that by synthetic data?
[flagged]
You should really disclaim that you're affiliated.
https://news.ycombinator.com/item?id=41866830
You can run it for free here: https://huggingface.co/spaces/ResembleAI/Chatterbox
A bit on the nose that they used a sample from a professional voice actor (Jennifer English) as the default reference audio file in that huggingface tool.
Sadly they don't publish any training or fine tuning code, so this isn't "open" in the way that Flux or Stable Diffusion are "open".
If you want better "open" models, these all sound better for zero shot:
Zeroshot TTS: MaskGCT, MegaTTS3
Zeroshot VC: Seed-VC, MegaTTS3
Granted, only Seed-VC has training/fine tuning code, but all of these models sound better than Chatterbox. So if you're going to deal with something you can't fine tune and you need a better zero shot fit to your voice, use one of these models instead. (Especially ByteDance's MegaTTS3. ByteDance research runs circles around most TTS research teams except for ElevenLabs. They've got way more money and PhD researchers than the smaller labs, plus a copious amount of training data.)
Great tip. I hadn't heard of MegaTTS3.
1 reply →
But what's the inference speed like on these? Can you use them in real-time interaction with an agent?
Fun to play with.
It makes my Australian accent sound very English though, in a posh RP way.
Very natural sounding, but not at all recreating my accent.
Still, amazingly clear and perfect for most TTS uses where you aren't actually impersonating anyone.
How does it work from the privacy standpoint? Can they use recorded samples for training?
Chatterbox is fantastic.
I created an API wrapper that also makes installation easier (Dockerized as well) https://github.com/travisvn/chatterbox-tts-api/
Best voice cloning option available locally by far, in my experience.
> Chatterbox is fantastic.
> I created an API wrapper that also makes installation easier (Dockerized as well) https://github.com/travisvn/chatterbox-tts-ap
Gave your wrapper a try and, wow, I'm blown away by both Chatterbox TTS and your API wrapper.
Excuse the rudimentary level of what follows.
Was looking for a quick and dirty CLI incantation to specify a local text file instead of the inline `input` object, but couldn't figure it out.
Pointers much appreciated.
This API wrapper was initially made to support a particular use case where someone's running, say, Open WebUI or AnythingLLM or some other local LLM frontend.
A lot of these frontends have an option for using OpenAI's TTS API, and some of them allow you to specify the URL for that endpoint, allowing for "drop-in replacements" like this project.
So the speech generation endpoint in the API is designed to fill that niche. However, its usage is pretty basic and there are curl statements in the README for testing your setup.
Anyway, to get to your actual question, let me see if I can whip something up. I'll edit this comment with the command if I can swing it.
In the meantime, can I assume your local text files are actual `.txt` files?
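In the meantime, a rough sketch of the idea (assuming the wrapper exposes the same `/v1/audio/speech` shape as OpenAI's API; the port, model, and voice values below are placeholders, so check the README's curl examples for the real ones):

```python
# Read a local .txt file and send it to the wrapper's OpenAI-style speech
# endpoint, saving the returned audio. Port, model, and voice are placeholders.
import requests, sys

text = open(sys.argv[1], encoding="utf-8").read()
resp = requests.post(
    "http://localhost:4123/v1/audio/speech",
    json={"model": "tts-1", "input": text, "voice": "alloy",
          "response_format": "mp3"},
)
resp.raise_for_status()
with open("speech.mp3", "wb") as f:
    f.write(resp.content)  # the raw audio bytes come back in the response body
```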
6 replies →
Would this be usable on a PC without a GPU?
It can definitely run on CPU — but I'm not sure if it can run on a machine without a GPU entirely.
To be honest, it uses a decently large amount of resources. If you had a GPU, you could expect about 4-5 GB of memory usage. And given the optimizations for tensors on GPUs, I'm not sure how well things would work "CPU only".
If you try it, let me know. There are some "CPU" Docker builds in the repo you could look at for guidance.
If you want free TTS without using local resources, you could try edge-tts https://github.com/travisvn/openai-edge-tts
> Every audio file generated by Chatterbox includes Resemble AI's Perth (Perceptual Threshold) Watermarker - imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.
Am I misunderstanding, or can you trivially disable the watermark by simply commenting out the call to the apply_watermark function in tts.py? https://github.com/resemble-ai/chatterbox/blob/master/src/ch...
I thought the point of this sort of watermark was that it was embedded somehow in the model weights, so that it couldn't easily be separated out. If you're going to release an open-source model that adds a watermark as a separate post-processing step, then why bother with the watermark at all?
Possibly a sort of CYA gesture, kinda like how original Stable Diffusion had a content filter IIRC. Could also just be to prevent people from accidentally getting peanut butter in the toothpaste WRT training data, too.
Stable Diffusion or rather Automatic1111 which was initially the UI of choice for SD models had a joke/fake "watermark" setting too which was deliberately doing nothing besides poking fun at people who were thinking that open source projects would really waste time on developing something that could easily be stripped/reverted by the virtue of being open source anyways.
Yeah, there's even a flag to turn it off in the parser `--no-watermark`. I assumed they added it for downstream users pulling it in as a "feature" for their larger product.
1. Any non-OpenAI, non-Google, non-ElevenLabs player is going to have to aggressively open source or they'll become 100% irrelevant. The TTS market leaders are obvious and deeply entrenched, and Resemble, Play(HT), et al. have to aggressively cater to developers by offering up their weights [1].
2. This is CYA for that. Without watermarking, there will be cries from the media about abuse (from anti-AI outfits like 404Media [2] especially).
[1] This is the right way to do it. Offer source code and weights, offer their own API/fine tuning so developers don't have to deal with the hassle. That's how they win back some market share.
[2] https://www.404media.co/wikipedia-pauses-ai-generated-summar...
Nevermind, this is just ~3/10 open, or not really open at all [1]:
https://github.com/resemble-ai/chatterbox/issues/45#issuecom...
> For now, that means we’re not releasing the training code, and fine-tuning will be something we support through our paid API (https://app.resemble.ai). This helps us pay the bills and keep pushing out models that (hopefully) benefit everyone.
Big bummer here, Resemble. This is not at all open.
For everyone stumbling upon this, there are better "open weights" models than Resemble's Chatterbox TTS:
Zeroshot TTS: MaskGCT, MegaTTS3
Zeroshot VC: Seed-VC, MegaTTS3
These are really good robust models that score higher in openness.
Unfortunately only Seed-VC is fully open. But all of the above still beat Resemble's Chatterbox in zero shot MOS (we tested a lot), especially the mega-OP Chinese models.
(ByteDance slaps with all things AI. Their new secretive video model is better than Veo 3, if you haven't already seen it [2]!)
You can totally ignore this model masquerading as "open". Resemble isn't really being generous at all here, and this is some cheap wool over the eyes trickery. They know they retain all of the cards here, and really - if you're just going to use an API, why not just use ElevenLabs?
Shame on y'all, Resemble. This isn't "open" AI.
The Chinese are going to wipe the floor with TTS. ByteDance released their model in a more open manner than yours, and it sounds way better and generalizes to voices with higher speaker similarity.
Playing with open source is a path forward, but it has to be in good faith. Please do better.
[1] "10/10" open includes: 1. model code, 2. training code, 3. fine tuning code, 4. inference code, 5. raw training data, 6. processed training data, 7. weights, 8. license to outputs, 9. research paper, 10. patents. For something to be a good model, it should have 7/10 or above.
[2] https://artificialanalysis.ai/text-to-video/arena?tab=leader...
13 replies →
>Without watermarking, there will be cries from the media about abuse (from anti-AI outfits like 404Media [2] especially).
it is highly amusing that they still believe they can put that genie back in the bottle with their usual crybully bullshit.
2 replies →
[dead]
Silly question, what’s the lowest spec hardware this will run ?
I was going to report how it runs on an old CPU but after fussing with it for about 30 minutes, I can't even get it to run.
Listing the issues in case it helps anyone:
- It doesn't work with Python 3.13, luckily `uv` makes it easy to build a venv with 3.12
- It said numpy 1.26.4 doesn't exist. It definitely does, but `uv pip` was searching for it on the pytorch repo. I passed an `--index-strategy` flag so it would check other repos. This could just be a bug in uv, but when I see "numpy 1.26.4 doesn't exist" and numpy is currently on 2.x, my brain starts to cramp up.
- The `pip install chatterbox-tts` version has a bug in CPU-only mode, so I cloned the Git repo
- The version at the tip of main requires `protobuf-compiler` installed on Debian
- I got a weird CMake error that I can't decipher. I think maybe it's complaining that the Python dev headers are not installed. Why would they be, I'm trying to do inference, not compile Python...
I know anger isn't productive but this is my experience almost any time I'm running Somebody Else's Python Project. Hit an issue, back up, hit another issue, back up, after an hour it still doesn't run.
We’ll know AGI has arrived when it can figure out Python dependency conflicts
1 reply →
Maybe this wasn't here when you looked at it, but maybe try Python 3.11?
> We developed and tested Chatterbox on Python 3.11 on Debain 11 OS; the versions of the dependencies are pinned in pyproject.toml to ensure consistency.
This GitHub issue says 6-7 GB VRAM: https://github.com/resemble-ai/chatterbox/issues/44
But if the model is any good someone will probably find a way to optimize it to run on even less.
Edit: Got it running on an old Nvidia 2060, I'm seeing ~5 GB VRAM peak.
Looking at the issues page, it seems it's not well optimized[1] currently.
So out of the box it seems quite beefy consumer hardware will be needed for it to perform reasonably. However, there seems to be significant potential for improvement, though I'm no expert.
[1]: https://github.com/resemble-ai/chatterbox/issues/127
It's not a silly question, it's the best question!
If something can be run for free but it's cheaper to rent, it voids the DIY aspect of it.
Not a silly question, I came here to ask too. Curious to know whether I need a GPU costing 4 digits or if it will run on my 12-year-old thinkpad shitbox. Or something in between.
The emotional exaggeration is interesting, though I don't think I've come across anything quite so versatile and easy to "sculpt" as ElevenLabs and its ability to generate a voice from a description of how you want the voice to sound. SparkTTS allows some additional parameters, and its project on GitHub has placeholders in its code indicating the model might be refined for more fine-grained emotional control. As it is, I've had some success with it and other models by trying to influence prosody and tonality with some heavy-handed cues in the text, which can then be used with VC to get closer to the desired results, but it's a much more cumbersome process than Eleven.
I've found it excellent with really common accents, but with other accents (that are pretty common too) it can easily get stuck picking a different one. For instance, several Scottish recordings ended up Australian, likewise a fairly mild Yorkshire accent.
I think this says more about Scottish than the model.
> For instance several Scottish recordings ended up Australian
Funnily enough, it made my Australian accent sound very English RP. I was suddenly very posh.
I'm English (RP) and it gave me a Yorkshire accent and Scottish accent in turn.
Like a professional actor!
What is the current state of the art for open source multilingual TTS? I have found Kokoro to be great as English as well, but am still searching for a good solution for French, Japanese, German...
I’ve also been looking for this. OpenVoice2 supports a few languages (5 IIRC), but I haven’t seen anything usable yet
Are these things good enough to narrate a book convincingly or does the voice lose coherence after a few paragraphs being spoken?
Most of these TTS systems tend to fall apart the longer the text - it's a good idea to just wrap any longform text into separate paragraph segmented batches and then stitch them back together again at the end.
I've also found that if your one-shot sample wav isn't really clean, Chatterbox sometimes produces random unholy whooshing sounds at the end of the generated audio, which is an added bonus if you're recording Dante's Inferno.
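For reference, the stitching approach is only a few lines; this sketch assumes the `ChatterboxTTS.from_pretrained()` / `model.generate()` API shown in the project README, so double-check the current signatures:

```python
# Split long text on blank lines, synthesize each paragraph separately,
# then concatenate the waveforms so drift never has time to build up.
import torch
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

def speak_long_text(text: str, prompt_wav: str, out_path: str = "out.wav"):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks = []
    for p in paragraphs:
        wav = model.generate(p, audio_prompt_path=prompt_wav)  # (1, samples) tensor
        chunks.append(wav)
        # Half a second of silence between paragraphs so the joins sound natural.
        chunks.append(torch.zeros(1, int(0.5 * model.sr)))
    ta.save(out_path, torch.cat(chunks, dim=1), model.sr)
```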
Yes, I've generated an audiobook of an epub using this tool and the result was passable: https://github.com/santinic/audiblez
Regarding your example "On a Google Colab's T4 GPU via Cuda, it takes about 5 minutes to convert "Animal's Farm"", do you know the approximate cost to perform this? I've only used Colab at the free level, so I have no concept of the costs for GPU time.
Once it's good enough Audible will be flooded with AI-narrated books so we'll know soon. (The only question is whether Amazon would disclose it, ofc)
Audible has already flooded their store with generated audio books. Go to the "Plus Catalog" and it's filled with them. The quality at the moment is complete trash, but I can't imagine it won't get better quickly.
The whole audiobook business will eventually disappear - probably within the decade. There will only be ebooks and on-device AI assistants will read it to you on demand.
I imagine it'll go like this: First pre-generated audiobooks as audio files. Next, online service to generate audio on demand with hyper customizable voices which can be downloaded. Next, a new ebook format which embeds instructions for narration and pronunciation to be read on-device. Finally, AI that's good enough to read it like a storyteller instantly without hints.
1 reply →
The flip side is that a solution where I can have an audiobook auto-generated for a book that doesn't have one (or use an existing ebook rather than paying Audible $30 for their version), and that's "good enough", is a legit improvement. AI-generated isn't as good, but it's better than nothing. Also, being able to interrupt and ask for more detail/context would be pretty nice. Like, I'm reading some Pynchon and I have to stop sometimes to look up the name of a reference to some product nobody knows now, stuff like that.
4 replies →
I think you're a bit behind on it: https://www.audible.com/about/newsroom/audible-expands-catal...
It's watermarked
1 reply →
I consult a company in the space (not resemble) and I can definitely say it can narrate a book
A year ago, for fun, I gave a friend a Carl Rogers therapy audiobook I'd made with an Attenborough-esque reading. It was pretty good even then, so it should be better now.
Example implementation with sample inference code + voice cloning example:
https://github.com/basetenlabs/truss-examples/tree/main/chat...
Still working on streaming
I just tested it out locally, really excellent quality, the server was easy to set up and well documented.
I'd love to get to real-time generation if that's in the pipeline? Would like to use it along with Home Assistant.
Just a regular reminder to tell your friends and family to be extra skeptical about phone conversations.
It’s becoming much more likely that the friend who desperately needs a gift card to Walmart isn’t the friend at all. :(
My family members speak Spanish with an Argentinean accent. From what I've seen in the space it looks like I'm safe.
Public research and well-intentioned AI companies are all focusing on (white) American English, but that doesn't mean the technology isn't being refined elsewhere. The scamming industry is massive and already goes to depths like slavery to get the job done.
I wouldn't assume you're safe just because the tech in your phone can't speak your language.
In the UK I have been getting AI-fancyTTS calls quite often. I even got one today.
Interrupting them with "can you make me a poem about x" works reliably. However, the latency is a dead giveaway.
The easiest way to defeat phone fraud is to decide ahead of time on a verbal password between family members (and close friends, if they're close enough that you'd lend them money).
In a real scenario, they'd know the verbal password and you can authenticate them. Drum it into them that this password will prevent other people from impersonating you in this brave new world of ai voices and even video.
That is more or less what I did with my parents, but this approach is still susceptible to active MITM attacks.
Two-factor authentication through a secure app or a trusted family member is probably also needed, though I haven't tackled that part with them yet.
1 reply →
"Oh sorry son did we have a password? I totally forgot."
This is a HN fantasy solution.
3 replies →
My bet is that the government will at some point have to put pressure on Walmart and others to stop selling those gift cards completely; impersonation is getting too easy and too cheap for there not to be a flood of those scam calls in the near future.
Interesting demo. A few observations, having uploaded a snippet of my own voice, and testing with some of my own text:
- the output had some of the qualities of my voice, but wasn't super similar. (Then again, the fact it could even do this from such a tiny snippet was impressive)
- increasing "CFG/pace" (whatever CFG is) even a little bit often just breaks down into total gibberish
- it was very inconsistent whether it would come out with a kind of British accent or an American one. (My accent is Australian...)
- the emotional exaggeration was interesting, but exactly what kind of emotion came out seemed to vary a lot
Does anyone know of an open-source TTS like this that can also encode speech to do voice conversion alongside TTS? i.e. a model that would take speech as input and convert it to one of the pretrained TTS voices.
Check out https://github.com/playht/PlayDiffusion
I love Chatterbox, it's my favourite. While the generation speed is quick, I wonder what performance optimizations I could try on my 3090 to improve throughput. It's not quite enough for real time.
> the emotion intensity control is killer. actual param you can tune per line.

> and the perth watermarking baked into every output, that’s the part most people are sleeping on. survives mp3, editing, even resampling. no plugin, no postprocess.

> also noticed the chatterboxtoolkitui floating in the org, with audiobook mode and batch voice conversion already wired in.

is it a banger??? yes ig so, a full setup ready for indies shipping voicefirst products right now.
They should put the meaning of "TTS" in the readme somewhere, probably near the top. Or their website.
TTS is a very common initialism for Text-to-Speech going back to at least the 90s.
Yeah, it's a very common initialism for people who work in the space, and have some context.
So? Acronym soup is bad communication.
4 replies →
Table Top Simulator.
It's obviously an AI for playing wargames without having to bother painting all the miniatures, or finding someone with the same weird interest in Balkan engagements during the Napoleonic era.
Anyone know how this compares to Kokoro? I've found Kokoro very useful for generating audiobook but it almost always pronounces words with paired vowels incorrectly. Daisy becomes die-zee, leave becomes lay-ve, etc.
If you're running Kokoro yourself then it might be worth checking your phonemizer / espeak-ng installs in case they are messing up the phonemes for those words (which are then passed on as inputs to Kokoro itself)
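A quick way to check whether the phonemizer layer is the culprit (assuming the `phonemizer` package with the espeak backend, which is what Kokoro setups commonly use):

```python
# If "daisy" or "leave" come back with the wrong vowels here, the problem is
# in the phonemizer/espeak-ng install rather than in Kokoro itself.
from phonemizer import phonemize

for word in ["daisy", "leave"]:
    print(word, "->", phonemize(word, language="en-us", backend="espeak"))
# Expected roughly: daisy -> deɪzi, leave -> liːv
```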
Chatterbox sounds much more natural. The zero shot voice cloning and exaggeration feature is sick!
Has anyone developed a way to annotate the input to provide emotional context?
In the past I've used different samples from the same speaker for this.
There are models that are trained for some kind of (in- or out-of-band) emotional (or more general style) prompting, but Chatterbox isn't one of them. So beyond building some kind of system that takes the input, processes it into chunks of text to speak, and picks the settings Chatterbox does support (mostly pace and exaggeration) for each chunk, there's probably no real way to do that with Chatterbox.
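If you wanted to hack that together yourself, the crude version is just a list of (text, settings) pairs; this sketch assumes the `exaggeration` and `cfg_weight` keyword arguments shown in the Chatterbox README, so verify them against the repo:

```python
# Crude out-of-band "emotion annotation": pick an exaggeration value per line.
import torch
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")
script = [
    ("Everything is fine.", 0.3),                  # flat delivery
    ("Everything is absolutely NOT fine!", 0.9),   # cranked-up delivery
]
wavs = [model.generate(text, exaggeration=ex, cfg_weight=0.5) for text, ex in script]
ta.save("scene.wav", torch.cat(wavs, dim=1), model.sr)
```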
It's only for English sadly
Are there any good options for non-English languages?
It's not on the same level in terms of emotion, but I believe the research https://github.com/CorentinJ/Real-Time-Voice-Cloning was based on is mostly oriented around Chinese first (and then English). It seems to work well enough if you and the voice you're cloning speak the same language though I haven't tested it much.
I’d sign up for a service that calls a pharmacy on my behalf to refill prescriptions. In certain situations, pharmacies will not list prescriptions on their websites, even though they have the prescriptions on file, which forces the customer to call by phone — a frustrating process.
I do feel bad for pharmacists, their job is challenging in so many ways.
Didn't Google already demo that with Google Duplex? It's not available here so I can't test it, but I think that's exactly the kind of thing duplex was designed to do.
Although, from a risk avoidance point of view, I'd understand if Google wanted to stay as far away from having AI deal with medication as possible. Who knows what it'll do when it starts concocting new information while ordering medicine.
I always have issues with TTS models that do not allow you to send large chunks of text. Seems this one does not resolve this either. Always has a limit of like 2-3 sentences.
That's just for their demo.
If you want to run it without size limits, here's an open-source API wrapper that fixes some of the main headaches with the main repo https://github.com/travisvn/chatterbox-tts-api/
How do you set the voice?
On the Huggingface demo, there seems to be no option for it.
It has a female voice. Any way to set it to a male voice?
It's voice cloning. Maybe not available in the demo, but you just provide a different input.
Looks good! What is the difference between the open-source version and the priced version?
Anyone know a good free open source speech to text? Looking for something for my laptop which is running Fedora KDE plasma.
Whisper large v3 turbo, if you need support for many languages and something fast enough to deploy even on smartphones (WhisperKit). You can also try lite-whisper on HF if you need even smaller weights and slightly faster speed.
Whisper has been great for me. I have a single-file uv powered python script that creates SRT files or timestamped text files from media stored on the filesystem. https://github.com/danielhoherd/pub-bin/blob/main/whisper-tr...
https://huggingface.co/spaces/nvidia/parakeet-tdt-0.6b-v2
https://huggingface.co/spaces/hf-audio/open_asr_leaderboard
Whisper?
The voice cloning is okay, not as good as Eleven Labs. There's a Rick (from Rick and Morty) voice example, and the generated audio sounds muffled and low quality. I appreciate that it's open source, though.
How would I install this alongside librechat or ollama using docker?
Definitely worse than the new ElevenLabs model (v3). That model is really good.
I disagree
in my experience, TTS has been a "pick two" situation:
- fast / cheap to run
- can clone voices
- sounds super realistic
from what I can tell, Chatterbox is the first that apparently lets you pick 3! (have not tried it myself yet, this is just what I can deduce)
Can you share one that is fast/cheap to run and sounds super realistic? I'm very interested in finding a good TTS and not really concerned about cloning any particular voice (but would like a "distinctive" voice that isn't just a preset one).
It's also about whether you want multilingual support and whether you want to run on edge devices. Chatterbox only supports English.
Here's an open-source serving implementation: https://lightning.ai/bhimrajyadav/studios/build-a-production...
Also, a deployable model: https://lightning.ai/bhimrajyadav/ai-hub/temp_01jwr0adpqf055...
You failed to mention that this is an ad for the company you work at. Also, the links don't even work without signing up for some shitty service.
Hey ipsum, sorry I could have mentioned that. We spend a ton of effort on open source and sharing our ML knowledge with the community. If you don't want to use our platform, the entire source code and a tutorial is there to run it on your own.
There are only English voices, even in the paid version. Using them in other languages results in an accent.
How does one train a TTS model with an LLM backbone? Practically, how does this work?
You use a neural audio codec to encode audio into codebooks.
Then you can treat the codebook entries as tokens and treat audio generation as a next-token prediction task.
You then take the generated codebook entries and run them through the codec's decoder to yield audio.
It works surprisingly well.
Speech-text models (TTS models with an LLM as the backbone) are the current meta.
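Very hand-wavy sketch of that loop (every object here is a placeholder, not a real library API):

```python
# Pseudocode-ish sketch of LLM-backbone TTS: text tokens in, audio-codec
# tokens out, decoded back to a waveform. lm, codec, and text_tokenizer
# are placeholder objects standing in for whatever stack you actually use.
def synthesize(text, lm, codec, text_tokenizer, max_audio_tokens=2000):
    # 1) The prompt is ordinary text tokens (possibly plus a speaker prompt).
    tokens = text_tokenizer.encode(text)
    # 2) Autoregressively sample discrete codec tokens, exactly like
    #    next-word prediction, until an end-of-audio token appears.
    audio_tokens = []
    for _ in range(max_audio_tokens):
        next_tok = lm.sample_next(tokens + audio_tokens)
        if next_tok == lm.end_of_audio:
            break
        audio_tokens.append(next_tok)
    # 3) The neural codec's decoder turns the codebook entries back into audio.
    return codec.decode(audio_tokens)
```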
Chatterbox CLI https://pypi.org/project/voice-forge/
How does it perform on multi-lingual tasks?
The readme says it only supports English
Watermarking is easily disabled in the code. I am wondering when they will release model weights with embedded watermarking.
wow! 200 ms, very good!
Fun stuff... I don't know how or why, but connecting bluetooth while on this site, made all of the audio clips play at once (Firefox, Linux). Not the best listening experience.
What is the latency?
for this, what does it take to support another language?
> Supported Lanugage
> Currenlty only English.
meh
very cherry picked
There’s been surprisingly little advancement in TTS after a rapid leap forward three years ago or so.
There's ElevenLabs, which is quite good but not incredible, and very expensive.
Everything else... all the big AI companies... have TTS systems that are kinda meh.
Everything else in AI has advanced in leaps and bounds, TTS remains deep in the uncanny valley.
Another TTS that only supports English. This really irritates me.
Maybe that irritation could be channelled into contributing to one that supports more than just English? Even small steps help: tweaking docs, adding missing/extra examples, fielding a few issues on GitHub (most are usually simple misunderstandings where a quick pointer can easily help a beginner).
For what it's worth, there are also a whole bunch of models that speak Chinese.
So far the US and China are spearheading AI research, so it makes sense that models optimize for languages spoken there. Spanish is an interesting omission on the US part, but that's probably because most AI researchers in the US speak English even if their native tongue is Spanish.
Previously, on Hacker News:
https://news.ycombinator.com/item?id=44145564
Thanks for posting this but it's conventional to only post links to past submissions if they had significant discussion, which none of these did.
I did a quick Google search before posting and only found a reference in a comment. But I searched for the link to the GitHub repo.
It took me ages to understand what TTS means!
In the spirit of being more constructive...
https://github.com/resemble-ai/chatterbox/pull/156
I don't like how for text-to-image/video it's T2V and I2V, and reference video-to-video is V2V... then when we get to text-to-speech it's TTS all of a sudden.
TTS has been around as an initialism long before the current AI wave, the x2y pattern is newer. (You do see it around TTS, even though TTS itself hasn't become T2S; e.g., TTS toolchains often include a g2p—grapheme-to-phoneme—component.)