Comment by andsoitis
1 day ago
It was recently reported by Reuters:
xAI is training Grok on 230,000 graphics processing units, including 30,000 of Nvidia's GB200 AI chips, in a supercluster, with inference handled by cloud providers, Musk said in a post on X on Tuesday. He added that another supercluster will soon launch with an initial batch of 550,000 GB200 and GB300 chips.
I suppose one could argue this training isn't capex, but I was also under the impression that xAI was building sites to house its AI infrastructure.