According to Nvidia CEO Jensen Huang, the ever-increasing speed and efficiency of processors mean that OpenAI's Sam Altman won't need $7 trillion for his AI chip initiative.
"You can't assume that you will buy more computers. You have to also assume that the computers are going to become faster and therefore the total amount that you need is not as much," Huang said on stage at the World Governments' Summit in Dubai.
"If you just assume computers aren't going to get any faster," Huang continued, "you might come to the conclusion that we need 14 planets, three galaxies and four more suns to fuel all this, but computer architecture continues to advance."
The global semiconductor market will hit $1 trillion by 2030, up from $556 billion in 2021, according to Digitimes Research. Digitimes expects servers and AI to be major growth drivers for the industry, but not to the tune of $7 trillion, because of projected saturation in the PC and notebook space.
But even these numbers from Digitimes might need to be refreshed, with Huang saying on stage that Nvidia will have a $2 trillion install base by the end of the decade.
"There's currently about a trillion dollars' worth of install base of data centers. Over the course of the next four or five years, we'll have$2 trillion worth of data centers that will be powering software around the world," Huang said.
Some experts are also concerned that this massive AI land rush will require untold amounts of energy, and the natural resources required will be "mind-boggling."
The AI industry also faces a significant shortage of AI chips, or GPUs, which is hindering its growth.
Huang has previously said that he was working to build out a supply of GPUs for Western-allied countries like Japan that are looking to develop "sovereign" AI capabilities.
But what if GPUs weren't the only kind of chip that could bring about the future of AI?
Huang said on stage that most of the world's major tech companies, including Google and Meta, were building their own proprietary AI chips, but that Nvidia's advantage is that its architecture is the only one that spans the gamut from cloud to servers to edge computing.
"That's what makes Nvidia unique. Our CUDA architecture has the ability to adapt to anything that comes along," he said. "Because it's available anywhere, any researcher can get access to Nvidia GPUs and invent the next generation of AI."
Huang said that this accessibility means that Nvidia is key to "democratizing AI."
Nvidia has been developing "China-friendly" versions of its AI-ready GPUs in order to comply with export restrictions from the US government and remain in the market.
Huang has previously warned that there are dozens of competitors in China taking advantage of the US restrictions on exporting Nvidia GPUs to the country and developing alternatives.