Comment by michaelt

2 years ago

I disagree.

One of the major issues with LLMs is the economics: a lot of people suspect ChatGPT loses money on every user, or at least on every heavy user, because it runs a big model and A100 GPUs are expensive and in short supply.
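As a rough sanity check on that claim, here's a back-of-envelope sketch. Every number in it (GPU rental price, generation speed, tokens per query, queries per heavy user) is an assumption picked for illustration, not a reported figure:

```python
# Back-of-envelope serving cost per heavy user.
# All inputs are made-up assumptions, not reported figures.
gpu_cost_per_hour = 2.00   # assumed A100 cloud rental, USD/hour
tokens_per_second = 30     # assumed generation speed, big model on one A100
tokens_per_query = 500     # assumed average response length
queries_per_day = 100      # assumed heavy user

cost_per_token = gpu_cost_per_hour / (tokens_per_second * 3600)
monthly_cost = cost_per_token * tokens_per_query * queries_per_day * 30
print(f"~${monthly_cost:.2f}/month per heavy user")  # ~$27.78 with these numbers
```

With these made-up numbers a heavy user costs more than a $20/month subscription brings in, which is the shape of the argument above.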

They're kinda reluctant to have customers, with API rate limits galore, and I've heard people claim ChatGPT lost the performance crown after switching to a cheaper-to-run model.

If Google had a model that operated on video in real time, that would imply either that they've got a model that performs well and is also very fast, or that their TPUs outperform the A100 by quite a bit; either one would be a big step forward.
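To make "real time" concrete: vision-language models typically turn each frame into a grid of patch tokens, so the required ingest rate is frames per second times tokens per frame. A tiny sketch, with the tokens-per-frame figure assumed for illustration (ViT-style patch grids commonly land in the hundreds of tokens per frame):

```python
# Required token throughput just to ingest a real-time video stream.
# tokens_per_frame is an assumed illustrative figure.
fps = 30
tokens_per_frame = 256

tokens_per_second = fps * tokens_per_frame
print(f"{tokens_per_second} tokens/s just to keep up with the stream")  # 7680
```

Sustaining thousands of input tokens per second per stream, while also generating responses, is well above typical serving throughput for a large model on a single A100, which is why real-time video would be such a notable signal.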