For those too impatient to read the details, check out the "Hear for yourself" examples toward the bottom of the page. They're reproducing decent sounding speech at 1.6 kbps.
1.6 kbps is nuts! I like to re-encode audio books or podcasts in Opus at 32 kbps and I consider that stingy. The fact that speech is even comprehensible at 1.6 kbps is impressive. As the article explains, their technique is analogous to speech-to-text, then text-to-speech.
The original recordings are a little stiff, and the encoded speech is a little more stiff. It isn't perfect, but it's decent. It'll be interesting to hear this technique applied to normal conversation. If regular speech holds up as well as their samples, it should be perfectly adequate for conversational speech. At 1.6 kbps, which is absurd.
Also, I wonder how well this technique could be applied to music. My guess is that it won't do justice to great musicians ... but it might be good enough for simple pop tunes.
Actually, this won't work at all for music because it makes fundamental assumptions that the signal is speech. For normal conversations, it should work, though for now the models are not yet as robust as I'd like (in case of noise and reverberation). That's next on the list of things to improve.
Here we go! This is the first minute or so of Penny Lane by The Beatles, converted down to a 10 KB .bin and back to a .wav: http://no.gd/pennylane.wav. Unsurprisingly, the vocals remain recognizable, but the music barely at all.
17 replies →
I tried it with music and the results were spooky. Very ethereal and ghostly. It was only with some classical music though, I might have to do a pop song next and share the results!
I’ve done experiments with Opus that produce intelligible (but ugly) speech at 2.3 kbit/s. It involves downsampling the Opus stream at the packet level—e.g. transmit only one out of every three packets. It was surprisingly easy. Nothing as sophisticated as what’s going on here.
Also based on work by Xiph. Possibly using the same LPC used here.
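That packet-level downsampling can be sketched in a few lines of Python. The function name here is made up, and a real implementation would operate on the RTP payloads or Ogg pages of an actual Opus stream, relying on the decoder's packet-loss concealment to paper over the gaps:

```python
def decimate_packets(packets, keep_every=3):
    # Transmit only every `keep_every`-th packet; the receiving
    # decoder's loss concealment has to fill in the rest.
    return [p for i, p in enumerate(packets) if i % keep_every == 0]

# Nine 20 ms packets decimated 3:1 -> one packet per 60 ms of audio,
# cutting the bitrate to roughly a third.
kept = decimate_packets([b"pkt%d" % i for i in range(9)])
```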
For comparison, the adaptive GSM codecs in use for cellphones today are also in the single-digit kbps range.
https://en.wikipedia.org/wiki/Adaptive_Multi-Rate_audio_code...
I use --vbr --bitrate 16 and it feels indistinguishable from the original for podcasts. As opusenc takes only wav for input and does not use multiple cores, I had to write scripts for parallel re-encoding of stuff.
I like to use Makefiles for parallel encoding.
Make -j4 or whatever. There are a few other ways to do this (e.g. xargs).
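The same idea in a minimal Python sketch: since opusenc is single-threaded, parallelism comes from running one encoder process per core. The helper names are made up; only the `--vbr`/`--bitrate` flags come from the comment above:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def encode_cmd(wav, bitrate=16):
    # Build the opusenc invocation for one file (VBR at 16 kb/s).
    return ["opusenc", "--vbr", "--bitrate", str(bitrate),
            wav, str(Path(wav).with_suffix(".opus"))]

def encode_all(wavs, jobs=4):
    # One encoder process per file, `jobs` at a time.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        for _ in pool.map(lambda w: subprocess.run(encode_cmd(w), check=True), wavs):
            pass
```

`make -j4` with a pattern rule achieves the same thing with less code, as suggested above.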
I've been waiting for someone to do this* with audio and/or video. Amazing work.
Also worth reading this related link: https://www.rowetel.com/?p=6639
There's some really exciting progress that could be made in this space. The page mentions that they could use it as a post-processing filter for Opus for better decoding to avoid changing the bitstream. It could also be useful as a way to accommodate for packet loss and recover "just enough" to avoid interrupting the conversation.
* encoding audio through a neural network for network transmission
For comparison, your standard police/fire/medical digital radio in the US sends voice at 4.4 Kb/s. So this is approximately a third of that.
and uses codecs (AMBE / AMBE+2) with a computation cost of up to a couple hundred MIPS, not GFLOPS
So maybe this line of work will mean more spectrum available in the future.
Only if Motorola gets out of the way and supports modern codecs and standards. Current public safety radio networks are using ancient TDMA tech that Motorola has cobbled together, along with audio codecs that shred voice quality. The only good part is the durability of the pricey radio, some are even intrinsically safe.
1 reply →
Just to put this in perspective: a traditional phone line encodes 56 Kb/s of data, which was long considered the channel size needed to carry the human voice at reasonable quality. They are doing it in 1.6 Kb/s!
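As a back-of-envelope check, using 64 kb/s (the G.711 rate of one digital voice timeslot; 56 kb/s is the usable rate on many US lines):

```python
trunk_bps = 64_000  # one G.711 voice timeslot
codec_bps = 1_600   # LPCNet at 1.6 kb/s
channels = trunk_bps // codec_bps
print(channels)  # 40 voice channels in a single timeslot
```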
Aren't "traditional" aka POTS lines analog, and therefore not doing any encoding whatsoever?
There are band filters and such on legacy, fully analog systems.
G.711 (which is standard now for non-cellular call audio) is a step down, but Opus at 16Kbps sounds better to me than a classic, full analog system due to the lack of band cutoff & smarter encoding.
For those interested in low bandwidth audio codecs, take a look at the voice codec used by Iridium handheld satellite phones, which was finalized around 1998 -- fully twenty-plus years ago.
It doesn't sound the best, but consider the processing power constraints it was designed with...
https://en.wikipedia.org/wiki/Iridium_Communications#Voice_a...
Iridium appears to be using a vocoder called AMBE. Its quality is similar to that of the MELP codec from the demo, and it also runs at 2.4 kb/s. LPCNet at 1.6 kb/s is a significant improvement over that -- if you can afford the complexity, of course (at least it'll now run on a phone).
Based on my previous experience with Iridium I believe it actually operates at a data rate up to about 3 to 3.2 kb/s. 2400 bps of it is actual usable voice payload, the remaining 600 bps is FEC.
Iridium data (not the new next-generation network) service is around the same speed, it's 2400 bps + whatever compression v42bis can gain you. For plain text and stuff it can be a bit faster, something that's already incompressible by v42bis will be right around 2400 baud.
The examples sound excellent: equal to or better than any text-to-speech synthesizer I've ever heard. I would love to start using it for audio books and for VoIP to save space and traffic as soon as possible. And a Linux-native text-to-speech synthesizer capable of producing speech of this quality is something I dream of (right now the only option I know is booting into Windows and using Ivona voices).
Mozilla DeepSpeech (Speech to Text) and Mozilla TTS are both useful at this point: https://research.mozilla.org/machine-learning/
This is really amazing work. Nice to see LPC pushed to its limits, and I can't wait to see what's next for speech compression. Here's hoping the mobile companies pick up on something similar soon.
Voice quality at that bit rate is absolutely astounding to me. This is one of my favorite things – to see such elegantly applied research.
It would be really cool to see this in FreeDV so the HF bands can benefit from it!
My dream is for telecom monopolies to get broken up by technologies like this.
might be fun to try to port this to the dadamachines doppler... :)
OK, so... Would it be possible to do something similar for video?
Similar as in the same approach, or as in "apply neural networks to all the things"? Because if it's the former, this approach was very specifically tailored to human speech, taking into account how much it can compress/interpolate qualities like pitch and the spectral envelope. That's far too specific to apply to video.
As for the latter, you'd have to perhaps feed Google Scholar the right incantations or ask someone with knowledge. As far as I know, video codecs already have a huge bag of tricks they use (for example the B-frames borrowed in this post). Even then, the key points in this codec were that firstly it's meant for use at very low bitrates, where existing codecs break down, and then secondly it's a vocoder, so it's converting audio to an intermediate form and resynthesizing it. That kind of lossiness is acceptable for audio, but I'm not sure how it would work acceptably for video.
I should have been more specific. I meant that instead of compressing video to minimise pixel difference, minimise feature difference instead.
Really thought this was gonna be human-neural-activity-to-speech and I feel like a doofus.
3 GFLOPS; we are deep into diminishing returns here. Opus seems good enough.
Keep in mind that the very first CELP speech codec (in 1984) used to take 90 seconds to encode just 1 second of speech... on a Cray supercomputer. Ten years later, people had that running in their cell phones. It's not just that hardware keeps getting faster, but algorithms are also getting more efficient. LPCNet is already 1/100 the complexity of the original WaveNet (which is just 2 years old) and I'm pretty sure it's still far from optimal.
This is roughly >100x computation for 2x improvement, which might sound great, except we are already talking single digit Kbits here, hence diminishing returns.
Opus is awesome and covers a previously unmatched spectrum of use cases... but that isn't everything.
Opus isn't good enough to be a replacement for AMBE for use over radio. Opus doesn't make it easier to make very high quality speech synthesis, etc.
Opus loss robustness could be much better using tools from this toolbox-- and we're a long way from not wanting better performance in the face of packet loss.
This is roughly a 2x improvement over AMBE+2, except AMBE peaks at maybe a couple hundred MIPS, and there are better, less computationally intensive alternatives in the 20-70 MIPS range: https://dspini.com/vocoders/lowrate/twelp-lowrate/twelp2400
Opus is still improving: v1.1 to v1.2 and on to v1.3 (current in FFmpeg) brought huge reductions in encoding compute, and the minimum bitrate for stereo wideband has fallen year after year.
The limiting factor for Opus's penetration has been compute: FEC is still rarely supported on VoIP deskphones for this reason, ditto for handling multiple Opus calls at once.
3 GFLOP/sec sounds like a lot, but it's considerably less math than the radio DSPs inside any modern phone's baseband are doing during a phone call.
I don't know much about phone tech, are the basebands really doing math or just instrumenting? My assumption would be that there is just some sensor writing to a buffer at a high frequency but that whatever processes that buffer operates at a lower frequency.
2 replies →
Not really, no. Especially not if this is implemented in a specialized accelerator. A GFLOP is not that much there. Also, like most other neural network algorithms, this could also be done in fixed point, thereby further reducing the computational cost.
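A toy illustration of the fixed-point idea: quantize weights and activations to Q15 and do the multiply-accumulate entirely in integers. This is a generic sketch of fixed-point arithmetic, not LPCNet's actual quantization scheme:

```python
def to_q15(x):
    # Quantize a float in [-1, 1] to Q15 fixed point (int16 range).
    return max(-32768, min(32767, int(round(x * 32768))))

def fixed_dot(weights, inputs):
    # Integer multiply-accumulate: a Q15*Q15 product is Q30,
    # so shift right by 15 to return to Q15 before accumulating.
    acc = 0
    for w, x in zip(weights, inputs):
        acc += (to_q15(w) * to_q15(x)) >> 15
    return acc / 32768.0  # back to float, for comparison only

# Closely matches the float dot product 0.5*0.5 - 0.25*0.5 = 0.125,
# with only integer arithmetic inside the loop.
approx = fixed_dot([0.5, -0.25], [0.5, 0.5])
```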
I wonder if a particular network can be implemented more economically if we only run it in "transformation" mode, without the need to train it.
There are technologies to compress deep networks by pruning weak connections. I don't believe the author is using this, so it's likely the computational cost could be reduced by a factor of 10. It could also be that simple tweaks to the NN architecture also work (was the author aiming for using a network as small as possible to begin with?).
Actually, what's in the demo already includes pruning (through sparse matrices) and indeed, it does keep just 1/10 of the weights as non-zero. In practice it's not quite a 10x speedup because the network has to be a bit bigger to get the same performance. It's still a pretty significant improvement. Of course, the weights are pruned by 16x1 blocks to avoid hurting vectorization (see the first LPCNet paper and the WaveRNN paper for details).
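The 16x1 block pruning can be sketched as follows. This is a one-shot, magnitude-based toy version; the real LPCNet pruning happens gradually during training (see the papers mentioned above):

```python
def prune_blocks(matrix, block=16, keep=0.1):
    # Zero all but the top `keep` fraction of block x 1 column blocks,
    # ranked by each block's sum of squared weights. Pruning whole
    # contiguous blocks (rather than single weights) keeps the
    # surviving weights vectorization-friendly.
    rows, cols = len(matrix), len(matrix[0])
    scored = []
    for c in range(cols):
        for r0 in range(0, rows, block):
            energy = sum(matrix[r][c] ** 2 for r in range(r0, min(r0 + block, rows)))
            scored.append((energy, r0, c))
    scored.sort(reverse=True)
    survivors = {(r0, c) for _, r0, c in scored[: max(1, int(len(scored) * keep))]}
    pruned = [[0.0] * cols for _ in range(rows)]
    for r0, c in survivors:
        for r in range(r0, min(r0 + block, rows)):
            pruned[r][c] = matrix[r][c]
    return pruned
```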
Local text to speech quality blows this out of the water with a much smaller bandwidth footprint than 1.6kb/s. To me this sounds sort of like "I figured out how to eat a pinecone in less than three seconds". Impressive but not useful.
That was a fairly ignorant comment because the whole idea behind a speech codec is to compress and reproduce speech in a manner that allows one to at least recognize who's speaking.
But I wonder if there isn't a gem in there somewhere. The essential expressive characteristics of a person's voice change much more slowly than the frame rate of any codec, and predictive coding alone doesn't cover all of the possibilities. There are also not that many unique human voices in the world. If you had several thousand people read a given passage and asked me to pick out which voice belongs to a close friend or relative, I doubt I could do it.
So, if a codec could adequately pigeonhole the speaker's inflection, accent, timbre, pacing, and other characteristics and send that information only once per transmission, or whenever it actually changes, then a text-to-speech solution with appropriate metadata describing how to 'render' the voice might work really well.
Put another way, I doubt the nerves between your brain and your larynx carry anything close to 1.6 kbps of bandwidth. Near-optimal compression might be achieved by modeling the larynx accurately alongside the actual nerve signals that drive it, rather than by trying to represent both with a traditional frame-based predictive model.
This idea is the basis of an interesting plot point in Vernor Vinge's sci-fi novel A Fire Upon the Deep.
The book is set in a universe where long-distance, faster-than-light communication is possible, but extremely bandwidth-constrained. However, localized computational power is many orders of magnitude beyond what we have today. As a result, much of the communication on the "Known Net" is text (heavily inspired by Usenet) but you can also send an "evocation" of audio and/or video, which is extremely heavily compressed and relies on an intelligent system at the receiving end to reconstruct and extrapolate all the information that was stripped out.
The downside, of course, is that it can become difficult to tell which nuances were originally present, and which are confabulated.
Another example of the dangers of compression algorithms that are too clever for their own good: https://www.theregister.co.uk/2013/08/06/xerox_copier_flaw_m...
1 reply →
The cepstrum that takes up most of the bits (or the LSPs in other codecs) is actually a model of the larynx -- another reason why it doesn't do well on music. Because of the accuracy needed to exactly represent the filter that the larynx makes, plus the fact that it can move relatively quickly, there's indeed a significant number of bits involved here.
The bitrate could definitely be reduced (possibly by 50%+) by using one-second packets along with entropy coding, but the resulting codec would not be very useful for voice communication. You want packets short enough to get decent latency, and if you use RF, then VBR makes things a lot more complicated (and less robust).
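The trade-off is easy to see in numbers. Assuming the demo's figures (1.6 kb/s in 40-ms packets), each packet carries just 64 bits, which leaves little redundancy for an entropy coder to exploit; a one-second packet would give it 1600 bits to work with, at the cost of up to a second of buffering delay:

```python
bitrate_bps = 1600
for frame_ms in (40, 1000):
    bits = bitrate_bps * frame_ms // 1000
    print(frame_ms, "ms packet ->", bits, "bits")
# 40 ms packet -> 64 bits
# 1000 ms packet -> 1600 bits
```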
4 replies →
Since this is implemented by a neural network, maybe all you have to do to obtain that is train it with different voices.
You really can't think of any uses for being able to push 40 simultaneous speech connections through a single 64Kbps voice channel?
The difference is that it uses the voice of the original person. And STT is rather error-prone; the WER is in the single-digit percent range :)