Comment by davidz
1 year ago
Currently it does: all audio is sent to the model.
However, we are working on turn detection within the framework, so you won't have to send silence to the model when the user isn't talking. It's a fairly straightforward path to cutting the cost by ~50%.
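To make that concrete, the gating could look something like the rough sketch below - `vad.is_speech` and `send_to_model` are made-up placeholders for illustration, not actual LiveKit APIs:

    def forward_audio(frames, vad, send_to_model, hangover_frames=25):
        """Forward frames to the model only around detected speech."""
        silence_run = 0
        for frame in frames:
            if vad.is_speech(frame):
                silence_run = 0
                send_to_model(frame)
            else:
                silence_run += 1
                # Keep a short silence tail so trailing words aren't clipped.
                if silence_run <= hangover_frames:
                    send_to_model(frame)

If the user is speaking roughly half the time, dropping the silent frames is where the ~50% saving comes from.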
Working on this for an internal tool - reliably detecting the absence of speech has been a PITA so far. Interested to see how you get on with this.
Use the voice activity detector we wrote for Home Assistant. It works very well: https://github.com/rhasspy/pymicro-vad
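For reference, basic usage looks roughly like this (adapted from the pymicro-vad README; `get_10ms_of_audio` is a hypothetical audio source):

    from pymicro_vad import MicroVad

    vad = MicroVad()
    threshold = 0.5

    # pymicro-vad expects 16 kHz, 16-bit mono PCM in 10 ms chunks.
    while chunk := get_10ms_of_audio():
        speech_prob = vad.Process10ms(chunk)
        if speech_prob < 0:
            continue  # not enough audio buffered yet
        print("speech" if speech_prob >= threshold else "silence")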
What if I'm watching TV and use the AI to control it? It should react only to my voice (a problem I ran into that forced me to use a wake word).
Currently we are using Silero VAD to detect speech: https://github.com/livekit/agents/blob/main/livekit-plugins/...
It works well for voice activity, though it doesn't always detect end-of-turn correctly (humans often pause mid-sentence to think). We are working on improving this behavior.
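For anyone experimenting with this outside the plugin, the standalone Silero model exposes a min_silence_duration_ms knob that trades pause tolerance against end-of-turn latency. A sketch, based on the snakers4/silero-vad README (exact names may vary by version):

    import torch

    # Load the Silero VAD model via torch.hub.
    model, utils = torch.hub.load('snakers4/silero-vad', model='silero_vad')
    get_speech_timestamps, _, read_audio, _, _ = utils

    wav = read_audio('sample.wav', sampling_rate=16000)
    # A higher min_silence_duration_ms tolerates mid-sentence pauses,
    # at the cost of detecting end-of-turn more slowly.
    segments = get_speech_timestamps(
        wav, model, sampling_rate=16000, min_silence_duration_ms=700)
    print(segments)  # e.g. [{'start': ..., 'end': ...}, ...]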
Can I currently put a VAD module in the pipeline and only send audio when there is an active conversation? It feels like that alone would solve the problem.