What factors drove NVIDIA’s data center revenue?
NVIDIA (NVDA) has continued to see strong growth in the gaming sector as eSports, virtual reality, and its Pascal-based GPUs (graphics processing units) have driven demand.
However, the key highlight of its fiscal 2017 was its Data Center segment, which grew more than threefold from just $97 million in fiscal 4Q16 to $296 million in fiscal 4Q17, making it the second-largest business segment after Gaming.
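As a quick sanity check on the growth figure above, the revenue numbers quoted in the text work out as follows (a minimal sketch using only those two figures):

```python
# Data Center segment revenue, in $ millions, as quoted in the text
q4_fy16_revenue = 97
q4_fy17_revenue = 296

growth_multiple = q4_fy17_revenue / q4_fy16_revenue
print(f"Growth multiple: {growth_multiple:.2f}x")
```

The multiple comes out just above 3x, consistent with the "more than threefold" characterization.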
The growth was driven by the increasing adoption of NVIDIA’s Tesla P100 GPUs and the DGX-1 supercomputer by CSPs (cloud service providers) and other high-performance computing providers. NVIDIA’s Tesla GPUs enable CSPs to boost processing power fivefold and reduce costs by 60%.
Transformation of AI
AI (artificial intelligence) was first adopted by hyperscale data centers such as Microsoft (MSFT), Facebook (FB), and Google (GOOG), which used AI for image recognition and voice processing. AI is now moving to the enterprise space as companies in healthcare, retail, and finance begin to use deep learning to solve problems and automate processes.
The increasing adoption of AI has encouraged several CSPs to offer AI-as-a-service.
Google is offering deep-learning capabilities on the Google Cloud Platform using NVIDIA’s Tesla K80 GPUs. Users can use up to eight GPUs for their deep-learning operations for an hourly charge of $0.70 for each GPU in the United States and $0.77 for each GPU in Asia and Europe.
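Based on the per-GPU hourly rates quoted above, the cost of a multi-GPU deep-learning job on this service can be sketched as follows (the rates, the eight-GPU cap, and the region split are the ones stated in the text; the function and region labels are illustrative):

```python
# Hourly per-GPU rates for NVIDIA Tesla K80 GPUs on Google Cloud Platform,
# as quoted in the text: $0.70 in the United States, $0.77 in Asia and Europe
RATES = {"us": 0.70, "asia_europe": 0.77}

def hourly_cost(num_gpus: int, region: str) -> float:
    """Hourly cost of attaching num_gpus K80 GPUs in the given region."""
    if not 1 <= num_gpus <= 8:
        raise ValueError("the service allowed up to eight GPUs per user")
    return num_gpus * RATES[region]

# A full eight-GPU configuration in each region
print(f"US: ${hourly_cost(8, 'us'):.2f}/hour")
print(f"Asia/Europe: ${hourly_cost(8, 'asia_europe'):.2f}/hour")
```

So a maxed-out eight-GPU configuration ran $5.60 per hour in the United States and $6.16 per hour in Asia and Europe.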
Amazon (AMZN) Web Services also offers deep-learning capabilities, allowing users to provision up to 16 of NVIDIA’s Tesla K80 GPUs. Microsoft’s Azure offers similar support with up to four of NVIDIA’s slightly older GPUs.
China’s web search engine Baidu (BIDU) is also offering deep-learning capabilities on the Baidu Cloud using NVIDIA’s Tesla P40 GPUs and deep learning software.
Customers use this service for both training and inference acceleration with open-source deep-learning frameworks such as TensorFlow and PaddlePaddle.
China’s Tencent Cloud will soon offer deep-learning capabilities on its public cloud platform using NVIDIA’s Tesla P100, P40, and M40 GPU accelerators and deep-learning software. The chip supplier stated that the cloud servers would integrate up to eight GPU accelerators in 1H17.
NVIDIA and Microsoft have developed the hyperscale GPU accelerator framework HGX-1, which will feature eight Tesla GPUs and will connect to the CPU (central processing unit) in configurable ways depending on the workload. The companies plan to make the open-source, scalable HGX-1 design the standard architecture for AI cloud computing.
IBM (IBM) will soon offer GPU support on its Bluemix cloud, allowing its users to add two NVIDIA Tesla P100 GPUs. This will provide up to 4.7 teraflops of double-precision performance and 16 gigabytes of memory. IBM is also working with NVIDIA in the supercomputer space: IBM’s POWER8 CPUs and NVIDIA’s Tesla P100 GPUs will be used in two new supercomputers for the U.S. Department of Energy.
Next, we’ll see how NVIDIA is supporting the adoption of AI by enterprises and supercomputers.