
New Tesla AI data center to use 130 MW in 2024, 500 MW in 2025


Tesla is preparing a new AI data center at Gigafactory Texas and has explained the power and cooling requirements for the infrastructure, which is aimed at Full Self-Driving (FSD) and other AI workloads.

Previous drone footage of the under-construction building shows new cooling hardware being installed. The fans in the video are mounted horizontally, resembling a huge GPU.


These will cool the FSD and other AI clusters in the data center. The new setup requires roughly 130 megawatts (MW) of power, with matching cooling capacity, this year. Tesla expects to grow both power and cooling to over 500 MW within the next 18 months or so.

Tesla’s under-construction Gigafactory, Texas (Image Source – X)

To build the new data center, Tesla has already procured a large quantity of Nvidia AI chips and plans to install up to 50,000 H100 GPUs by the end of this year. These chips will help the company advance its computing systems and enable faster training of new models for self-driving, robotics, and other AI products.
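For context, a rough back-of-envelope estimate (not from Tesla) shows how the GPU count relates to the quoted power figure. The sketch below assumes roughly 700 W per H100 SXM at its rated TDP and an assumed facility overhead (PUE) of about 1.5 for cooling and supporting infrastructure; both numbers are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope estimate (illustrative assumptions, not Tesla figures):
# - 50,000 H100 GPUs (per the article)
# - ~700 W per H100 SXM at rated TDP (published NVIDIA spec)
# - PUE of ~1.5 to cover cooling and supporting infrastructure (assumed)

num_gpus = 50_000
watts_per_gpu = 700          # assumed H100 SXM TDP
pue = 1.5                    # assumed facility overhead factor

gpu_power_mw = num_gpus * watts_per_gpu / 1e6
facility_power_mw = gpu_power_mw * pue

print(f"GPU power alone: ~{gpu_power_mw:.0f} MW")       # ~35 MW
print(f"With overhead:   ~{facility_power_mw:.0f} MW")  # ~53 MW
```

Even under these rough assumptions, the H100s would account for only part of the ~130 MW figure, which is consistent with the article's point that the cluster also includes Tesla's own AI hardware and other chips.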

The new computing system will consist of roughly half Tesla's own AI hardware, with the other half made up of Nvidia and other chips. It has been speculated that the EV maker would move entirely to its own silicon, but that does not seem feasible with Hardware 4, so Tesla will remain focused on Nvidia for the time being.


The situation might change with Hardware 5, which will be renamed AI5 to reflect its focus on advanced AI for future self-driving improvements.

Tesla AI5 may be released in the second half of 2025 and is expected to be about 10 times more capable than a Hardware 4 computer. The electric vehicle maker confirmed that it will design the whole software stack for AI5 from the ground up.


(source)
