Comment by martzoukos
10 days ago
I guess that there is no streaming option for sending generated tokens to, say, an LLM service to process the text in real-time.
Whisper uses an encoder-decoder architecture, so it's hard to run it efficiently in a streaming setup, though whisper-streaming is a thing.
https://kyutai.org/next/stt is natively streaming STT.
There are many streaming ASR models based on CTC or RNN-T. Look, for example, at sherpa-onnx (https://github.com/k2-fsa/sherpa-onnx), which can run streaming ASR, VAD, diarization, and more.
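For the original question (forwarding tokens to an LLM service in real time), the general pattern with any streaming recognizer is: feed audio in small chunks, diff the growing partial transcript against what you've already emitted, and push only the new suffix downstream. A minimal sketch of that loop, with a toy stand-in recognizer (the real sherpa-onnx API differs; `ToyRecognizer` and `send_to_llm` are hypothetical placeholders):

```python
def stream_tokens(recognizer, audio_chunks):
    """Feed chunks to a streaming recognizer, yield only newly decoded text."""
    emitted = ""
    for chunk in audio_chunks:
        partial = recognizer(chunk)    # full partial transcript so far
        new = partial[len(emitted):]   # just the newly decoded suffix
        emitted = partial
        if new:
            yield new

class ToyRecognizer:
    """Stand-in for a streaming CTC/RNN-T model: 'decodes' each chunk
    immediately instead of waiting for the whole utterance."""
    def __init__(self):
        self.text = ""
    def __call__(self, chunk):
        self.text += chunk
        return self.text

def send_to_llm(token):
    # Hypothetical sink: in practice an HTTP or websocket call to the LLM.
    print(token, end="", flush=True)

rec = ToyRecognizer()
for tok in stream_tokens(rec, ["hello", " world"]):
    send_to_llm(tok)
```

With a real streaming model you would replace `ToyRecognizer` with the recognizer's partial-result call; the suffix-diffing loop stays the same.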