xAI has tripled the growth rate of AI data center compute, going from 5X per year to over 15X per year. This next-level improvement is just the start of the ultimate business-technology flywheel: using AI to improve AI and AI data centers, which then makes better AI data centers, which makes better AI.
The buildout is continuing, with a 4X increase in power and an 11X increase in compute in the first six months of this year. That will give 11X the compute used to train Grok 3.
This increase in compute, energy and resulting AI capability is the critical path.
Tesla and xAI will combine capabilities into fully autonomous, self-improving AI, closing the loop for truly exponential AI progress.
In the meantime, the amazing human teams at xAI and Tesla have accelerated, and will continue to accelerate, the AI data center build: constructing the data centers faster while improving every layer of the software and all of the synthetic and video data.



What xAI has already achieved
The speed of data center construction has been taken to the next level: xAI builds about 3-5 times faster than its competitors.
Coherent shared memory has been achieved across 200,000 Nvidia H100/H200 chips, and this hardware-level, microsecond-latency Ethernet solution scales to a million chips and beyond. One million B200s would be about 20 zettaflops of compute.
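As a rough sanity check on that aggregate figure, here is a back-of-the-envelope calculation. The ~20 petaflops-per-B200 value is an assumption for a low-precision training/inference figure (Nvidia's quoted numbers vary by precision and sparsity), not an official spec for this cluster.

```python
# Back-of-the-envelope aggregate compute for a hypothetical 1-million-B200 cluster.
# Assumption: ~20 petaFLOPS per B200 at low precision (varies by FP8/FP4, dense/sparse).

PFLOPS_PER_B200 = 20e15      # assumed low-precision throughput per chip, in FLOPS
NUM_CHIPS = 1_000_000        # one million B200s

total_flops = PFLOPS_PER_B200 * NUM_CHIPS
print(f"Aggregate compute: {total_flops:.1e} FLOPS "
      f"(~{total_flops / 1e21:.0f} zettaFLOPS)")
# -> Aggregate compute: 2.0e+22 FLOPS (~20 zettaFLOPS)
```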
They have the best AI and it is improving rapidly. The 11X increase in compute by mid-2025, with 400,000 chips (roughly half of them B200s), should produce a Grok 4 trained and released around August 2025.
They are applying AI for automated reinforcement learning, as others like DeepSeek are doing.
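To make "AI for automated reinforcement learning" concrete, here is a minimal, hypothetical sketch of the loop: an AI judge scores sampled answers and the best and worst become a preference pair for a trainer (DPO/PPO, not shown). The toy policy and judge below are placeholders for illustration, not xAI's or DeepSeek's actual systems.

```python
# Minimal sketch of AI-automated RL data collection: an AI "judge" replaces
# human raters, producing preference pairs for later policy training.
# The toy policy and judge are placeholders, not real models.

import random

def toy_policy(prompt: str) -> str:
    """Stand-in for the model being trained: returns a candidate answer."""
    return f"answer to '{prompt}' v{random.randint(1, 100)}"

def toy_judge(prompt: str, response: str) -> float:
    """Stand-in for an AI judge model: returns a quality score in [0, 1]."""
    return random.random()

def collect_preference_pair(prompt: str, n_samples: int = 4) -> dict:
    candidates = [toy_policy(prompt) for _ in range(n_samples)]
    scores = [toy_judge(prompt, c) for c in candidates]
    ranked = sorted(zip(scores, candidates), reverse=True)
    # Highest- and lowest-scored answers become a (chosen, rejected) pair.
    return {"prompt": prompt, "chosen": ranked[0][1], "rejected": ranked[-1][1]}

print(collect_preference_pair("Why is the sky blue?"))
```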
xAI is applying AI to synthetic data generation. Data must scale with compute for the AI itself to keep scaling.
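One way to see why data has to scale with compute: under the widely cited Chinchilla heuristics (training compute C ≈ 6·N·D and compute-optimal tokens D ≈ 20·N), the optimal token count grows with the square root of compute, so a 100X compute budget calls for roughly 10X more data. The constants below are approximations from the Chinchilla paper and the example budgets are illustrative.

```python
# Rough Chinchilla-style estimate of how training data must grow with compute.
# Heuristics: C ≈ 6*N*D and compute-optimal D ≈ 20*N, so C ≈ 120*N^2
# and D grows with sqrt(C). Constants are approximations.

import math

def optimal_params_and_tokens(compute_flops: float):
    n_params = math.sqrt(compute_flops / 120)   # from C = 6*N*(20*N) = 120*N^2
    n_tokens = 20 * n_params
    return n_params, n_tokens

for compute in (1e25, 1e27):                    # example budgets, 100X apart
    n, d = optimal_params_and_tokens(compute)
    print(f"C={compute:.0e} FLOPS -> ~{n:.1e} params, ~{d:.1e} tokens")
# 100X more compute wants ~10X more parameters and ~10X more tokens,
# which is why synthetic and video data generation matters at this scale.
```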
Nvidia has applied AI to chip design for the B200 and B200 variants. This is far more complicated than just asking an LLM a question. It involves an architecture and system more like DeepMind's solving of protein folding, or Tesla's years of FSD work to reach robotaxi.
FSD/Robotaxi-scale, multi-year projects are what is needed to automate, accelerate, and improve major sections of the AI software and AI development stacks. Replacing CUDA coding layers with more direct-to-hardware approaches could unlock up to 100X performance gains. Doing this will be very hard, but the prize and benefit are huge: the same AI data center could get 100X the performance and efficiency.
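A toy illustration of why collapsing software layers helps: the unfused version below walks memory three times, one full pass per operation, the way separately launched kernels do. A genuinely fused, closer-to-hardware kernel would compute the whole expression per element in registers and touch memory only on the way in and out. Real direct-to-hardware gains come from custom kernels, compilers, and scheduling; the 100X figure is the article's aspiration, not what this snippet demonstrates.

```python
# Toy illustration of operator fusion and memory traffic.
# unfused() mimics separately launched kernels: each step reads and writes
# the full array, producing intermediate buffers.

import numpy as np

x = np.random.rand(5_000_000).astype(np.float32)

def unfused(x):
    y = x * 2.0                # pass 1 over memory
    z = y + 1.0                # pass 2 over memory
    return np.maximum(z, 0.0)  # pass 3 over memory

def fused_reference(x):
    # Same math in one expression. (NumPy still builds temporaries here;
    # this is only a reference for what a hand-fused kernel would compute
    # in a single streaming pass.)
    return np.maximum(x * 2.0 + 1.0, 0.0)

assert np.allclose(unfused(x), fused_reference(x))
```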
The 100X gains from bigger and better data centers could yield roughly 50% drops in the loss functions and likely new emergent capabilities.
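To connect 100X compute to a roughly 50% loss drop, here is a hedged power-law calculation. The exponent (~0.15 on the reducible loss) is an illustrative value in the range reported by scaling-law papers, not a measured xAI number, and real loss curves also have an irreducible floor.

```python
# Illustrative power-law scaling: reducible loss ~ C^(-alpha).
# alpha = 0.15 is an assumed, plausible exponent, not a measured value.

alpha = 0.15
compute_multiplier = 100

loss_ratio = compute_multiplier ** (-alpha)
print(f"Reducible loss shrinks to {loss_ratio:.2f}x of its previous value "
      f"(~{(1 - loss_ratio) * 100:.0f}% drop) for {compute_multiplier}X compute")
# -> roughly 0.50x, i.e. about a 50% drop in the reducible part of the loss
```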
The end point is self-replicating, self-improving, and self-learning AI. Even before that, large human teams of experts, enhanced by AI on the critical parts that enable better AI, will drive constantly accelerating development.
Before that point comes the creation and improvement of an AI industrial flywheel. Internet companies have long had the goal of creating business-technology flywheels.
Improving AI data centers for improved AI, which in turn improves the AI data centers: this is the ultimate flywheel. It will deliver rapid and constantly accelerating improvement of AI.


It is not just the machine learning flywheel. It is improving and expanding the data with synthetic and video data.
It is improving all aspects of the data center and chips. Upgrading the substrate on which the intelligence resides.
Everything. Building it all faster. Improving it all faster. Fully automating each step. Self learning. Faster. Better. Faster. Better.

By 2030, there could be multiple 10-gigawatt data centers, plus 100 million cars and Teslabots with AI5/AI6/AI7 chips forming a distributed inference cloud with 30-60 gigawatts of compute power, and the data centers could have AI optimization for 100X to 1000X the efficiency.
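A quick back-of-the-envelope on the fleet figure: the per-device inference power below is an assumption (AI5/AI6/AI7 specs are not public), chosen to show what would put a 100-million-unit fleet in the 30-60 gigawatt range.

```python
# Back-of-the-envelope for a distributed fleet inference cloud.
# Per-device power draw is an assumption, not a published chip spec.

fleet_size = 100_000_000                                  # cars + Teslabots (article's figure)
watts_per_device_low, watts_per_device_high = 300, 600    # assumed inference power per node

low_gw = fleet_size * watts_per_device_low / 1e9
high_gw = fleet_size * watts_per_device_high / 1e9
print(f"Fleet inference capacity: roughly {low_gw:.0f}-{high_gw:.0f} gigawatts")
# -> roughly 30-60 gigawatts, the range cited above
```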
xAI and Tesla could follow Nvidia into AI-driven chip design and optimization, and into the design of other sections like networking, memory, etc.




Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.