
What Opportunities Does Deep Learning Offer NVIDIA?


Dec. 4 2020, Updated 3:53 p.m. ET

Stages of deep learning

In the previous part of this series, we saw that GPU (graphics processing unit) accelerated computing is helping supercomputers solve many complex problems efficiently. The cost and time savings brought by GPUs have encouraged many cloud companies to use these chips for their deep learning tasks.

Deep learning happens in two stages. The first stage is training, where DNNs (deep neural networks) for generic applications or specific verticals are built using tremendous amounts of data. The second stage is inferencing, where the computer uses the trained network to act on new, real-world data. NVIDIA (NVDA) sees strong growth opportunities in both functions.
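The two stages above can be sketched in code. The following is a minimal, illustrative example, not NVIDIA's actual workflow: a tiny fully connected network is first trained on the XOR problem (stage one), and the fitted weights are then reused to make predictions (stage two). All names, sizes, and hyperparameters here are assumptions for illustration; real DNNs train on vastly larger datasets, typically on GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the XOR truth table (inputs X, targets y).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: training -- fit the network's weights to the data.
W1 = rng.normal(size=(2, 8))   # hidden layer weights (8 units, assumed)
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # output layer weights
b2 = np.zeros(1)

lr = 0.1
for _ in range(20000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the binary cross-entropy loss.
    d_out = out - y
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Stage 2: inferencing -- apply the trained weights to input data.
def infer(x):
    h = np.tanh(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

preds = (infer(X) > 0.5).astype(int).ravel()
print(preds)  # a successful run recovers the XOR pattern
```

In production the two stages are usually split: training runs once in a data center, and the frozen weights are deployed to many inference servers, which is why the article treats them as separate markets.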


Opportunity for NVIDIA in deep learning training

At present, NVIDIA’s GPUs are used by Amazon (AMZN) for its digital assistant, by Microsoft (MSFT) for image recognition, and by Google (GOOG) and Baidu (BIDU) for voice commands.

These companies are using NVIDIA’s GPUs to train DNNs, as GPUs reduce training time from months to weeks. As GPU technology advances, training time could fall from weeks to days. At present, a Pascal GPU-accelerated server completes deep learning training about 500 hours faster than a CPU (central processing unit) only server.

At NVIDIA’s 2017 Investor Day, Enterprise senior vice president Shankar Trivedi stated that customers are currently buying about 1.4 exaflops of training compute. He expects this figure to grow to 55 exaflops by 2020, given the growth in areas where consumers use DNNs. He sees deep learning training as an ~$11 billion market opportunity for NVIDIA.


Opportunity for NVIDIA in inferencing

The second stage in deep learning is inferencing. When a consumer gives a voice command to a digital assistant, the audio is sent to a data center for inferencing, and the result is sent back to the user’s device. Currently, 100% of inferencing happens on CPU-powered servers.

At the 2017 GPU Technology Conference, NVIDIA launched its Pascal GPU-accelerated inferencing solution, TensorRT. NVIDIA CEO Jensen Huang showed that Pascal-based TensorRT replaced 15–16 racks with just one rack, bringing up to $2 million in savings. It delivered inferencing within seven milliseconds for a trained neural network.

GPU-powered inferencing will serve not only generic applications such as voice or image recognition but also domain-specific capabilities such as understanding medical dictation and advanced video analytics. AI could expand to other verticals, such as smart cities, retail, manufacturing, and automotive. Shankar Trivedi sees inferencing as a $15 billion opportunity for NVIDIA. Next, we’ll look at AI’s various applications.
