Comment by donatj

10 days ago

I know nothing about Whisper. Is this usable for automated translation?

I own a couple of very old and, as far as I'm aware, never-translated Japanese movies. I don't speak Japanese, but I'd love to watch them.

A couple of years ago I was negotiating with a guy on Fiverr to translate them. At his usual rate per minute of footage it would have cost thousands of dollars, but I'd negotiated him down to a couple hundred before he presumably got sick of me and ghosted me.

Whisper can indeed transcribe Japanese and translate it to English, though quality varies by dialect and audio clarity. You'll want the large-v3 model for best results. ffmpeg's new whisper filter can do it in one pass with something like `ffmpeg -i movie.mp4 -vn -af "whisper=model=ggml-large-v3.bin:task=translate:destination=output.srt:format=srt" -f null -` (the model option takes a path to the whisper.cpp GGML weights, and the filter writes subtitles to its destination option rather than to a normal output file; check `ffmpeg -h filter=whisper` for the exact options on your build).
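If ffmpeg isn't an option, the same thing is a few lines with the openai-whisper Python package. A minimal sketch (the file name is a placeholder, and the plain-text printing stands in for a real SRT writer):

    # Minimal sketch using the openai-whisper package (pip install openai-whisper).
    # "movie.mp4" is a placeholder; Whisper decodes the audio via ffmpeg itself.
    import whisper

    model = whisper.load_model("large-v3")   # largest model: best quality, slowest
    result = model.transcribe(
        "movie.mp4",
        language="ja",      # source language; Whisper can also auto-detect
        task="translate",   # Whisper only translates *into* English
    )

    # Each segment carries start/end timestamps usable for subtitles.
    for seg in result["segments"]:
        print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text'].strip()}")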

  • I wonder how the results of AI Japanese-audio-to-English-subtitles would compare to a fansubbed anime. I'm guessing it would be a more literal translation rather than a contextual or cultural one.

    I found an interesting article about trollsubs, which I guess are fansubs made with a contemptuous flair. https://neemblog.home.blog/2020/08/19/the-lost-art-of-fan-ma...

    Tangent: I'm one of those people who watch movies with closed captions. Anime is difficult because the subtitle track is often the original Japanese-to-English subtitles and not closed captions, so the text does not match the English audio.

    • I do Japanese transcription + Gemini translations. It's worse than a fansub, but it's much, much better than nothing. The first thing that can struggle is actually the VAD; next is special names and places, where prompting can help but not always (see the sketch below). Finally there's uniformity (or style): I still feel that I can't control the punctuation well.

    • I was recently playing around with Google Cloud ASR as well as smaller Whisper models, and I can say it hasn't gotten to that point: Japanese ASR/STT systems all generate final kanji-kana mixed text, and since the kanji-to-pronunciation mapping is many-to-many, it's non-trivial enough that it currently needs a human native speaker to fix misheard text in a lot of cases. LLMs should in theory be good at this type of task, but they're somehow clueless about how Japanese pronunciation works and just rubber-stamp inputs as written.

      The conversion from pronunciation to intended text isn't deterministic either, so it probably can't be solved by "simply" generating all-pronunciation outputs. Maybe a multimodal LLM as the ASR/STT, or a novel dual-input as-spoken-plus-estimated-text validation model, could be made? I wouldn't know, though. It seems like a semi-open question.
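      On the prompting point above: both openai-whisper and faster-whisper accept an initial_prompt that biases the decoder toward names and spellings you expect. A minimal sketch (the character and place names are made-up examples):

          # Sketch: biasing Whisper toward known proper nouns via initial_prompt.
          # The names below are hypothetical examples.
          import whisper

          model = whisper.load_model("large-v3")
          result = model.transcribe(
              "episode01.mp4",
              language="ja",
              # Whisper conditions its decoder on this text, so spellings that
              # appear here are more likely to be reproduced in the output.
              initial_prompt="登場人物: 高橋健一、さくら。舞台は京都。",
          )
          print(result["text"])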

In my experience it works OK. Setting `--language English` actually works on a lot of source languages and makes Whisper translate directly into English.

You can also transcribe it as Japanese and then translate to English. This can sometimes help with more semantically complex dialogue.

For example, using faster-whisper-xxl [1]:

Direct translation:

    faster-whisper-xxl.exe --language English --model large-v2 --ff_vocal_extract mdx_kim2 --vad_method pyannote_v3 --standard <input>

Use Japanese, then translate:

    faster-whisper-xxl.exe --language Japanese --task translate --model large-v2 --ff_vocal_extract mdx_kim2 --vad_method pyannote_v3 --standard <input>

1. https://github.com/Purfview/whisper-standalone-win
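If you'd rather call the library than the standalone CLI above, the underlying faster-whisper package exposes roughly the same knobs. A minimal sketch (the file name is a placeholder; the standalone build's mdx_kim2 vocal extraction and pyannote VAD are extras not shown, so this uses the library's built-in Silero VAD instead):

    # Sketch using the faster-whisper library (pip install faster-whisper),
    # roughly mirroring the CLI flags above. File name is a placeholder.
    from faster_whisper import WhisperModel

    model = WhisperModel("large-v2", device="cuda", compute_type="float16")
    segments, info = model.transcribe(
        "input.mkv",
        language="ja",
        task="translate",   # Japanese audio -> English text
        vad_filter=True,    # built-in Silero VAD, not the CLI's pyannote option
    )

    print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
    for seg in segments:
        print(f"[{seg.start:7.2f} -> {seg.end:7.2f}] {seg.text.strip()}")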

My personal experience trying to transcribe (not translate) was a complete failure. The thing would invent stuff. It would also get completely lost when more than one language was used.

It also doesn't understand context, so it makes a lot of the errors you see in automatic translations of YouTube videos, for example.

  • It's curious that YouTube's is still so bad given the current state of the art, though it has gotten a lot better in the last six months.

Whisper has quite bad issues with hallucination. It will inject sentences that were never said in the audio.

It's decent for classification but poor at transcription.
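One partial mitigation: Whisper reports per-segment confidence signals, and hallucinated segments often show a low average log-probability, a high no-speech probability, or a suspiciously high compression ratio. A sketch of filtering on those fields with openai-whisper (the thresholds are illustrative guesses to tune, not canonical values):

    # Sketch: drop Whisper segments that look like hallucinations.
    # Threshold values are illustrative and need tuning per source.
    import whisper

    model = whisper.load_model("large-v3")
    result = model.transcribe("movie.mp4", language="ja", task="translate")

    kept = [
        seg for seg in result["segments"]
        if seg["avg_logprob"] > -1.0         # keep reasonably confident decodes
        and seg["no_speech_prob"] < 0.6      # skip spans the model thinks are silence
        and seg["compression_ratio"] < 2.4   # repetitive text is a hallucination tell
    ]
    for seg in kept:
        print(seg["text"].strip())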

  • Pre-processing with a vocal extraction model (BS-RoFormer or similar) helps a lot with the hallucinations, especially with poor-quality sources.
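    One way to wire that up: separate the vocals first, then transcribe only the vocal stem. A sketch using Demucs, a similar separation model with a simple CLI (the output path follows Demucs's default layout and may differ by version):

        # Sketch: vocal extraction with Demucs, then Whisper on the vocal stem.
        # Demucs writes to separated/<model>/<track>/vocals.wav by default;
        # adjust the path for your version.
        import subprocess
        import whisper

        subprocess.run(["demucs", "--two-stems=vocals", "movie_audio.wav"], check=True)

        model = whisper.load_model("large-v3")
        result = model.transcribe(
            "separated/htdemucs/movie_audio/vocals.wav",
            language="ja",
            task="translate",
        )
        print(result["text"])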

Hey, Whisper can indeed transcribe Japanese and even translate it (but only into English). For the best results you need to use the largest model, which may run slowly or quickly depending on your hardware.

Another option is to use something like VideoToTextAI, which lets you transcribe quickly, translate into 100+ languages, and then export a subtitle (SRT) file.