
Comment by Workaccount2

1 day ago

Deepmind gets to work directly with the TPU team to make custom modifications and designs specifically for Deepmind projects. They get to make pickaxes built exactly for the mine they are working in.

Everyone using Nvidia hardware has a lot of overlap in requirements, but each of them also has enough architectural differences that they won't be able to match Google.

OpenAI announced they will be designing their own chips for exactly this reason, but that also becomes another extremely capital-intensive investment for them.

This also doesn't get into the fact that Google already has S-tier datacenters and datacenter construction/management capabilities.

> Deepmind gets to work directly with the TPU team to make custom modifications

You don't think Nvidia has field-service engineers and applications engineers with their big customers? Come on, man. There is quite a bit of dialogue between the big players and the chipmaker.

  • They do, but they need to appease a dozen different teams from a dozen different labs, which forces Nvidia to take generalized approaches and/or dictate approaches that pigeonhole labs into using those methods.

    Deepmind can do whatever they want and get the exact hardware to match it. It's a massive advantage when you can discover a bespoke way of running a filter and get a hardware implementation of it without having to share that with any third parties. If OpenAI takes a new find to Nvidia, everyone else using Nvidia chips gets it too.