
Comment by nojvek

2 years ago

That’s different. It’s essentially using the Whisper model for audio-to-text and feeding that text into ChatGPT.

Multimodal would be watching a YouTube video without captions and asking “how did a certain character know it was raining outside?” when the answer comes from the sound of rain, with no image of rain on screen.
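
For concreteness, the audio-to-text-then-LLM pipeline from the first paragraph might look roughly like this. This is just a sketch assuming the OpenAI Python SDK (v1.x); the audio file name and the question are placeholders, not anything from an actual product:

```python
# Sketch of a "Whisper -> ChatGPT" pipeline: transcribe audio, then ask
# a text-only chat model about the transcript. File and question are
# hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: speech-to-text with the Whisper model.
with open("video_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: feed the resulting text into a chat model. The model only ever
# sees text, so anything not spoken aloud (ambient sounds, visuals) is lost.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": f"Transcript: {transcript.text}\n\n"
                       "How many cups of sugar does this recipe call for?",
        },
    ],
)
print(response.choices[0].message.content)
```

Note how the rain example above would fail here: rain noise never makes it into the transcript, so the chat model has nothing to reason over.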

I don't know if it's related to Gemini, but Bard seems to be able to do something like this: it can answer questions such as "how many cups of sugar are called for in this video?". I'm not sure whether it relies on subtitles.

From https://bard.google.com/updates:

> Expanding Bard’s understanding of YouTube videos

> What: We're taking the first steps in Bard's ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.

> Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.