Accelerated computing could shake up data center capex, power demand over next decade

Inside Infra 26 May

CPUs cannot carry the load for large, data-hungry applications. Data center executives and their infrastructure fund backers are drawing up new capex plans for accelerated computing hardware that is up to the task.

As artificial intelligence-related applications, and the revenues they generate, continue to grow, the hype and skepticism surrounding the phenomenon are converging on the same expectation: billions of dollars of capex by data center operators on supportive accelerated computing hardware over the next decade.

“You're seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world's data centers and build it out as accelerated computing,” said Jen-Hsun "Jensen" Huang, CEO of Nvidia Corporation, on an earnings call this week.

California-based Nvidia is a computer hardware and software company whose customers include the data center industry.

“You'll have a pretty dramatic shift in the spend of the data center from traditional computing to accelerated computing with smart NICs, smart switches, [and] of course, GPUs, and the workload is going to be predominantly generative AI,” Huang said. “Over the last four years, [and] call it … USD 1trn worth of infrastructure installed, it's all completely based on CPUs and dumb NICs. It's basically unaccelerated.”

CPUs, or central processing units, are hardware that perform complex control functions but struggle to process large quantities of data, creating latency. To accommodate data-hungry applications and tools, data center operators and enterprises are adding accelerated computing hardware, such as graphics processing units, tensor processing units and adaptive computing devices, to their servers, according to industry literature. This “parallel processing” hardware takes the heavy data crunching off the CPU, boosting throughput for AI and large language model workloads.
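To make that division of labor concrete, below is a minimal Python sketch of offloading a large matrix multiplication, the core operation in AI and large language model workloads, from the CPU to a GPU. It is an illustration only, assuming the PyTorch library and a CUDA-capable GPU; it is not drawn from any of the companies quoted here.

    # Minimal sketch: the same data-parallel workload on a CPU and on a GPU.
    # Assumes PyTorch is installed and a CUDA-capable GPU is present.
    import time
    import torch

    N = 4096  # matrix dimension; large enough that parallelism matters
    a = torch.randn(N, N)
    b = torch.randn(N, N)

    # CPU path: a handful of general-purpose cores grind through the multiply,
    # which is where the latency described above builds up.
    t0 = time.time()
    c_cpu = a @ b
    print(f"CPU matmul: {time.time() - t0:.3f}s")

    if torch.cuda.is_available():
        # Accelerated path: the same operation is dispatched to thousands of
        # GPU cores working in parallel; the CPU only orchestrates the call.
        a_gpu, b_gpu = a.cuda(), b.cuda()
        torch.cuda.synchronize()
        t0 = time.time()
        c_gpu = a_gpu @ b_gpu
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
        print(f"GPU matmul: {time.time() - t0:.3f}s")

On typical server hardware the accelerated path completes the same arithmetic many times faster, which is the gap the capex plans described in this article are meant to close.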

For several years now, infrastructure investors have poured billions of dollars into the data center industry because of its contractual cash flows and barriers to entry. Now, facing the risk of technological obsolescence, the industry is looking to raise money to expand computing power fast enough to keep pace with data demand.

“In the last decade building public cloud, USD 5trn was invested — that's a lot of money,” said DigitalBridge Group CEO Marc C. Ganzi during a keynote speech at the Connect (X) conference a couple of weeks ago. “We're going to spend USD 6trn to USD 7trn building AI. AI requires three times more the compute and it requires more infrastructure and more facilities than public cloud, just to give you a sense of what's coming.”

Global data center capex is projected to grow at an 11% compound annual growth rate through 2027, reaching USD 400bn, market research firm Dell’Oro Group said earlier this year. In 2021, the firm estimated that server spending would grow at a compound annual growth rate of 11% over five years, comprising nearly half of data center capex by 2025. CPU refresh cycles, accelerated computing and edge computing would drive those investment dollars.
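For a sense of scale, the short calculation below is plain compounding arithmetic, not a Dell’Oro figure: it shows what five years of 11% annual growth add up to.

    # Illustrative arithmetic: cumulative effect of an 11% compound annual growth rate.
    rate = 0.11
    years = 5
    multiplier = (1 + rate) ** years  # (1.11)^5
    print(f"Spending multiplier after {years} years: {multiplier:.2f}x")  # ~1.69x

Sustained for five years, an 11% CAGR lifts annual spending by roughly 69%, not 11%.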

Indeed, AI could drive capex for the entire communications infrastructure ecosystem.

“If you're going to deploy artificial intelligence and you're going to deploy it in a low latency environment, the only way you do it is by putting that network infrastructure on the edge,” Ganzi said. “So whether it's fiber to the suburbs, to a tower, to a smaller data center in a secondary and tertiary market, these workloads are absolutely moving to the periphery of the network … the supremacy fight for AI is won on the perimeter of the network.”

While overall data center capex is expected to grow, the growth might not be linear, because accelerated computing is expected to make data centers more power-efficient.

“The fact that accelerated computing is so energy efficient [means] the budget of the data center will shift very dramatically towards accelerated computing,” Huang said.

The chief executive said at an industry conference earlier in the month that Nvidia’s DGX system, its proprietary GPU-based AI server platform, could replace “tens of thousands” of CPU servers, reducing power and costs “by an order of magnitude.”

Another difficulty the industry faces is procuring the power itself. Lead times for generators and substation transformers are around two years, said Novva Data Centers CEO Wes Swenson in an interview a couple of weeks ago.

Enabling AI software in facilities with accelerated computing could curb those needs.

“Eventually, clients will start to bifurcate their compute loads and decide what needs to go [into] mission-critical data centers with generators and data centers without generator backups,” Swenson said. “[AI] will actually tell the data which data center to go to.”

The CEO predicted that the industry will begin to accommodate such systems within the next two years.
