← Back to context

Comment by TheMrZZ

9 hours ago

> A single TPU 8t superpod now scales to 9,600 chips and two petabytes of shared high bandwidth memory, with double the interchip bandwidth of the previous generation. This architecture delivers 121 ExaFlops of compute and allows the most complex models to leverage a single, massive pool of memory.

This seems impressive. I don't know much about the space, so maybe it's not actually that great, but from my POV it looks like a competitive advantage for Google.

it is. itll still not create AGI without some breakthrough in instruction vs data separation of concerns

  • You can park a lot there. No offence but I love how AGI doesn't mean anything. It used to be that AI was a goal post. Now it is AGI. We could use characters from sci-fi culture to describe milestones. In order to achieve robocop level, we must solve the instruction vs data problem.