American businessman Elon Musk announced on the social network X that his AI startup xAI has launched “the world’s most powerful AI training cluster.” According to Musk, the system will provide “a significant advantage in training the world’s most powerful AI by any metric by December of this year.”
“The system with 100,000 liquid-cooled H100s on a single RDMA fabric is the most powerful AI training cluster in the world,” Musk said in his post. It is unknown whether Musk personally took part in launching the AI supercomputer, but the published photo shows him talking with xAI engineers while the equipment was being set up.
Earlier this year, the media reported on Musk’s ambition to launch a so-called “gigafactory of compute,” a giant data center housing the world’s most powerful AI supercomputer, by the fall of 2025. Building a training cluster of this scale required purchasing an enormous number of Nvidia H100 accelerators. Evidently, Musk lacked the patience to wait for the release of the H200 accelerators, let alone the upcoming Blackwell-generation B100 and B200 models, which are expected to ship before the end of this year.
Musk later wrote that the AI supercomputer will be used to train the most powerful AI by all metrics. This is likely the Grok 3 model, whose training is expected to be completed by the end of this year. Notably, the AI supercomputer, located in a Memphis data center, appears to significantly outclass its counterparts: the Frontier supercomputer is built on 37,888 AMD accelerators, Aurora uses roughly 60,000 Intel accelerators, and Microsoft’s Eagle uses 14,400 Nvidia H100 accelerators.
Source: 3dnews.ru