Comment by davidz

1 year ago

Currently it does: all audio is sent to the model.

However, we are working on turn detection within the framework, so you won't have to send silence to the model when the user isn't talking. It's a fairly straightforward path to cutting the cost by ~50%.

Working on this for an internal tool - detecting no speech has been a PITA so far. Interested to see how you go with this.

Can I currently put a VAD module in the pipeline and only send audio when there's an active conversation? Feels like just that would solve the problem?
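
For what it's worth, here's a minimal sketch of that kind of gating, assuming 16 kHz 16-bit mono PCM and using the `webrtcvad` package; `send_to_model` is a hypothetical stand-in for whatever your pipeline does with speech frames:

```python
# Gate audio with a VAD so silence is never streamed to the model.
# webrtcvad requires 16-bit mono PCM at 8/16/32/48 kHz, in 10/20/30 ms frames.
import webrtcvad

SAMPLE_RATE = 16000
FRAME_MS = 30
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # bytes per 16-bit frame

vad = webrtcvad.Vad(2)  # aggressiveness 0 (least) .. 3 (most)

def gate_frames(pcm_frames, send_to_model):
    """Forward only frames the VAD classifies as speech."""
    for frame in pcm_frames:
        if len(frame) == FRAME_BYTES and vad.is_speech(frame, SAMPLE_RATE):
            send_to_model(frame)
        # non-speech frames are dropped here instead of being sent upstream
```

In practice you'd probably want some padding/hangover around detected speech so word boundaries don't get clipped, which may be part of why plain VAD has been painful for your internal tool.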