NVIDIA’s business model in the data center market
In the previous part of this series, we saw that NVIDIA (NVDA) is playing a critical role in the accelerated data center market. Its Tesla GPUs (graphics processing units) accelerate servers and reduce data center costs. This matters because the advent of deep learning demands immense processing speed.
Let us see how NVIDIA is using its business model in the accelerated data center market.
Scaling the Tesla platform
NVIDIA has leveraged its Tesla platform for the accelerated data center market and has launched specific hardware for different applications.
- The company has launched the Tesla M40 and M4 deep-learning accelerators for hyperscale and cloud data centers.
- It offers K80 accelerators for HPC (high-performance computing) and is now launching its new Pascal architecture for HPC. The Tesla P100, the first GPU on this architecture, is built on TSMC’s (TSM) 16 nm (nanometer) FinFET (fin field-effect transistor) process technology.
- NVIDIA has gone a step further and unveiled the DGX-1, which it calls the world’s first deep-learning supercomputer, for AI (artificial intelligence) workloads.
For developers, NVIDIA offers its ComputeWorks SDK (software development kit).
Network partners for the data center
NVIDIA deploys its data center products through two networks.
- The first is server manufacturers such as Dell, IBM (IBM), and Hewlett Packard Enterprise (HPE), which integrate the Tesla platform into their data center servers. Over 400 server models are equipped with Tesla GPUs.
- The second is cloud service providers such as Microsoft’s (MSFT) Azure, Amazon’s (AMZN) EC2, and Alibaba’s (BABA) Aliyun cloud service in China (MCHI).
NVIDIA sees several growth opportunities for its deep-learning products. We will look at these opportunities in the next part of this series.