Comment by valine

2 years ago

It’s not live, but it’s in the realm of outputs I would expect from a GPT trained on video embeddings.

Implying they’ve solved single token latency, however, is very distasteful.

OP says that Gemini was given still images as input, not video, and the dev blog post shows it was instructed to reply to each input in relevant terms. Needless to say, that's quite different from what the demo implies, and is at least theoretically already within GPT's abilities.