NVIDIA Blackwell GPUs Sold Out: Demand Surges, What’s Next?


Nvidia continues to ride the AI wave as the company sees unprecedented demand for its latest Blackwell next-generation GPUs. Supply for the next year is sold out, Nvidia CEO Jensen Huang told Morgan Stanley analysts during an investors' meeting.

A similar situation occurred with Hopper GPUs several quarters ago, Morgan Stanley analyst Joe Moore pointed out.

Nvidia’s traditional customers are driving the overwhelming demand for Blackwell GPUs, including major tech giants such as AWS, Google, Meta, Microsoft, Oracle, and CoreWeave. Every Blackwell GPU that Nvidia and its manufacturing partner TSMC can produce over the next four quarters has already been purchased by these companies.

The exceptionally high demand appears to solidify the continuing growth of Nvidia’s already formidable footprint in the AI processor market, even with competition from rivals such as AMD, Intel, and various smaller cloud-service providers.

“Our view continues to be that Nvidia is likely to gain share of AI processors in 2025, as the biggest users of custom silicon are seeing very steep ramps with Nvidia solutions next year,” Moore said in a client note, according to TechSpot. “Everything that we heard this week reinforced that.”

The news comes months after Gartner predicted that AI chip revenue would skyrocket in 2024.

Designed for massive-scale AI deployments

Nvidia introduced the Blackwell GPU platform in March, hailing its ability to “unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI—all emerging opportunities for Nvidia.”

The Blackwell platform includes the B200 Tensor Core GPU and the GB200 Grace “superchip.” These processors are designed to handle the demanding workloads of large language model (LLM) inference while significantly reducing energy consumption, a growing concern in the industry. At the time of its release, Nvidia said the Blackwell architecture adds capabilities at the chip level to leverage AI-based preventative maintenance to run diagnostics and forecast reliability issues.

“This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time, and to reduce operating costs,” the company said in March.

SEE: AMD Reveals Fleet of Chips for Heavy AI Workloads

Memory supply remains a question

Nvidia resolved the packaging issues it initially faced with the B100 and B200 GPUs, which allowed the company and TSMC to ramp up production. Both the B100 and B200 use TSMC’s CoWoS-L packaging, and questions remain about whether the world’s largest contract chipmaker has sufficient CoWoS-L capacity.

It also remains to be seen whether memory makers can supply enough HBM3E memory for cutting-edge GPUs like Blackwell as demand for AI GPUs skyrockets. In particular, Nvidia has not yet qualified Samsung’s HBM3E memory for its Blackwell GPUs, another factor influencing supply.

Nvidia acknowledged in August that its Blackwell-based products were experiencing low yields and would require a re-spin of some layers of the B200 processor to improve production efficiency. Despite these challenges, Nvidia appeared confident in its ability to boost Blackwell production in the fourth quarter of 2024. It expects to ship several billion dollars’ worth of Blackwell GPUs in the final quarter of this year.

The Blackwell architecture is also among the most complex architectures ever built for AI. It exceeds the demands of today’s models and prepares the infrastructure, engineering, and platform that organizations will need to handle the parameters and performance of an LLM.

Nvidia is not just working on compute performance to meet the demands of these new models; it is also concentrating on the three biggest obstacles limiting AI today: energy consumption, latency, and precision. The Blackwell architecture is designed to deliver unprecedented performance with better energy efficiency, according to the company.

Nvidia reported that data center revenue in the second quarter was $26.3 billion, up 154% from the same quarter a year earlier.
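As a quick sanity check on that growth figure, the implied year-ago data center revenue can be derived from the reported 154% increase. This is a minimal sketch; the year-ago figure itself is not stated in the article and is computed here purely for illustration:

```python
# Derive the implied year-ago revenue from the reported growth rate.
# Figures from the article: $26.3B in the second quarter, up 154% year over year.
current_revenue_b = 26.3   # billions of USD
growth_pct = 154           # percent increase year over year

# "Up 154%" means current = prior * (1 + 1.54), so invert the relationship.
prior_revenue_b = current_revenue_b / (1 + growth_pct / 100)
print(f"Implied year-ago revenue: ${prior_revenue_b:.1f}B")
```

This puts the comparable year-ago quarter at roughly $10.4 billion, consistent with the scale of the ramp described above.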

roosho Senior Engineer (Technical Services)
I am Rakib Raihan RooSho, Jack of all IT Trades. You got it right. Good for nothing. I try a lot of things and fail more than that. That's how I learn. Whenever I succeed, I note that in my cookbook. Eventually, that became my blog. 