AI

Computex: Nvidia plans faster rollout, lists Blackwell partners

Nvidia already dominates the market for AI chips, but it is not satisfied. To meet demand, the company is speeding up the pace at which it pushes AI chips and AI factories (that is, AI-boosted data centers) out into the world.

At Computex this week in Taipei, Taiwan, Nvidia co-founder and CEO Jensen Huang said that his company is moving from a two-year schedule of bringing new chips to market to a “one-year rhythm.” 

That means Nvidia’s Blackwell platform, announced just a few months ago and due out later this year, and which Huang called “the most complex, highest-performance computer the world has ever made,” will be quickly followed by the Blackwell Ultra platform in 2025, and then by another brand-new GPU-CPU platform called Rubin in 2026. Huang briefly showed the Rubin components on a screen during his keynote. It will include sixth-generation NVLink Switch technology (Nvidia just announced the fifth generation) bridging the Rubin GPU with an Arm-based Vera CPU.

“All of these chips that I'm showing you here are all in full development, and the rhythm is one year at the limits of technology, and all 100% architecturally compatible [backward-compatible with installed base systems via software],” Huang said.

“Our basic philosophy is very simple: build the entire data center scale disaggregated and sell to you in parts on a one-year rhythm,” Huang said. “And we push everything to technology limits. Whatever TSMC process technology we use, we will push it to the absolute limits, whatever packaging technology, push it to the absolute limits, whatever memory technology, push it to absolute limits. SERDES technology, optics technology, everything is pushed to the limit.”

The acceleration is happening to meet demand for AI coming from virtually every device and computing interaction, a trend that will require higher-performance but also more power-efficient AI training and inference in data centers and elsewhere, Huang said.

“The days of millions-of-GPU data centers are coming, and the reason for that is very simple,” he said. “Of course, we want to train much larger models, but… in the future, almost every interaction you have with the internet or with a computer will likely have a generative AI running in the cloud somewhere… Some of it will be on-prem, some of it is on your device, and a lot of it could be in the cloud.”

That necessitates platforms like Nvidia’s 72-GPU DGX Blackwell, successor to the still-pretty-new 8-GPU DGX H100 based on the Hopper architecture.

Huang also announced a long list of computer and server manufacturers that are creating Blackwell-based products, ranging from cloud-based to on-premises to embedded and edge AI systems, to allow enterprises to build AI factories and AI-boosted data centers. Those companies include ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn.

Speeding up platform introduction also means that Nvidia needs to help system developers and product manufacturers accelerate development time. Huang said Nvidia is now doing that by enabling its MGX modular reference design platform to support Blackwell products. This includes the new Nvidia GB200 NVL2 platform designed for mainstream large language model inference, retrieval-augmented generation and data processing.

MGX provides computer manufacturers with a reference architecture for quickly and cost-effectively building more than 100 system design configurations, Huang said. Manufacturers start with a basic system architecture for their server chassis, then select the GPU, DPU, and CPU to address different workloads. To date, more than 90 systems from over 25 Nvidia partners (including, surprisingly, AMD and Intel) that leverage the MGX reference architecture have been released or are in development, up from 14 systems from six partners last year. Nvidia claimed that using MGX can slash development costs by up to three-quarters and cut development time by two-thirds, to just six months.
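The pitch behind a modular reference design is essentially combinatorial: a small set of interchangeable building blocks multiplies into the 100-plus configurations Huang cited. Here is a minimal sketch in Python of that idea, with a purely illustrative parts catalog; the component names and counts below are placeholders, not an actual MGX catalog or Nvidia API.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative catalogs only -- placeholders, not an actual MGX parts list.
CHASSIS = ["1U", "2U", "4U"]
GPUS = ["gpu-a", "gpu-b", "gpu-c", "gpu-d"]  # e.g., different accelerator SKUs
DPUS = ["dpu-x", "none"]
CPUS = ["arm", "x86"]

@dataclass(frozen=True)
class SystemConfig:
    """One buildable server design: a chassis plus the parts slotted into it."""
    chassis: str
    gpu: str
    dpu: str
    cpu: str

def enumerate_configs() -> list[SystemConfig]:
    # The reference-design idea: every chassis x GPU x DPU x CPU combination
    # is a candidate system, so a handful of options per slot compounds
    # into many distinct designs.
    return [SystemConfig(c, g, d, p)
            for c, g, d, p in product(CHASSIS, GPUS, DPUS, CPUS)]

if __name__ == "__main__":
    configs = enumerate_configs()
    print(f"{len(configs)} candidate system designs")  # 3 * 4 * 2 * 2 = 48
```

Even this toy catalog yields 48 designs from 11 parts; add a few more options per slot and the count clears 100, which is the economics of starting from a common reference chassis rather than designing each server from scratch.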

Huang said Nvidia’s aim is to develop an entire AI factory supercomputer platform in an open way so that it can then be disaggregated and offered to partners to develop their own configurations for their own markets and customers.

“The reason for that is because all of you could create interesting and innovative configurations and all kinds of different styles and different data centers for different customers in different places, some of it for edge, some for telco,” he said. “All of the different innovations are possible if we make the systems open.”