Musk reportedly pitched investors on the proposed “gigafactory of compute” earlier this month, according to The Information.
According to The Information, Elon Musk told investors earlier this month that his company, xAI, plans to build a supercomputer by fall 2025 to power a smarter future version of its Grok chatbot. The machine, which Musk reportedly described as a “gigafactory of compute,” would require tens of thousands of NVIDIA H100 GPUs and cost billions of dollars to build. Musk recently said the third version of Grok will need at least 100,000 of the chips — a fivefold jump over the 20,000 GPUs reportedly used to train Grok 2.0.
In the presentation, Musk reportedly told investors that the planned GPU cluster would be at least four times the size of anything used by xAI’s rivals today. Grok is currently on version 1.5, released in April and billed as able to process visual information such as diagrams and photos in addition to text. Earlier this month, X began offering premium subscribers AI-generated news summaries powered by Grok.