
Cloud and Internet Service Companies: NVDA’s Data Center Drivers



NVIDIA’s key growth drivers in the data center space

NVIDIA’s (NVDA) data center business took flight in fiscal 2016 as more and more companies adopted its GPUs (graphics processing units) for AI purposes.

Cloud companies initially adopted NVIDIA’s Volta GPU for internal use. They’re now making it available to the public through their GPU cloud services. Microsoft (MSFT) Azure made NVIDIA’s Tesla V100 publicly available in fiscal Q1 2019, and Google (GOOG) Cloud made it available in May. Many hyperscale and consumer Internet companies have also adopted Volta.

In the HPC space, weak demand from major supercomputing projects was offset by strong demand from enterprises. Enterprises across various verticals, from manufacturing to oil and gas, are adopting Volta.


AI inference

NVIDIA also saw strong demand for AI inference. Inference GPU shipments to cloud service providers rose over 100% sequentially in fiscal Q1 2019. This strong growth came after the company announced its TensorRT 4 AI inference accelerator software at the 2018 GPU Technology Conference.

With TensorRT 4, users can accelerate common deep-learning inference workloads, such as speech recognition, recommendation, and computer vision, up to 190 times faster than with CPUs (central processing units). This capability expands TensorRT’s use cases and broadens NVIDIA’s market reach to more than 30 million hyperscale servers worldwide.

NVIDIA continues to expand its AI market by exploring new verticals. Next, we’ll look at the developments the company is making in the data center space.
