
Can Nvidia Maintain Its High Growth Momentum in Fiscal 2018?

Part 10 of 16

Nvidia Joins the Open Compute Project

Analysts’ estimates for deep learning 

In the previous part of this series, we saw that Nvidia (NVDA) is looking to boost its Data Center revenues by increasing the adoption of AI (artificial intelligence). To accelerate AI adoption, Nvidia joined the OCP (Open Compute Project), which makes server designs available to the public.

A research report by Tractica noted that a majority of deep learning software programs are being written for open source structures. This trend could drive GPU (graphics processing unit) spending for deep learning projects from $43.6 million in 2015 to $4.1 billion by 2024.


Nvidia’s contribution to the Open Compute Project

Nvidia entered the OCP with the HGX-1, a new hyperscale GPU accelerator it developed in collaboration with Microsoft (MSFT) and Foxconn subsidiary Ingrasys. The HGX-1 design will be released as part of Microsoft’s Project Olympus, an open server design for hyperscale data centers.

The HGX-1 houses up to eight Pascal-based Tesla P100 GPUs and connects to CPUs (central processing units) through a switching design based on Nvidia’s NVLink interconnect. This switching design lets data centers use one, several, or all of the GPUs, depending on the workload.

The Project Olympus server design lets data centers link up to four HGX-1 units at a time, for a total of 32 Tesla P100 GPUs. Because the design also uses standard PCIe ports, it can work with Intel’s (INTC) Xeon Skylake CPUs and Advanced Micro Devices’ (AMD) Naples CPUs.

Nvidia’s director of Tesla products, Roy Kim, stated that the NVLink-based switching design sets the HGX-1 apart from competing accelerators because it lets data centers choose the CPU and GPU configurations that best fit their specific workloads.

What does Nvidia aim to achieve through the OCP?

Nvidia’s OCP contribution won’t earn it any direct profits, since the design specifications are made publicly available. However, it could help Nvidia set a new standard for cloud computing.

Kim stated that AI today is where PC (personal computer) motherboards were in 1995. That year, Intel and Microsoft created the ATX (Advanced Technology eXtended) industry standard, which is still in use today and helped Intel dominate the PC processor market.

Nvidia currently dominates the AI hardware market, but designs for AI systems remain fragmented. As AI adoption increases, the industry needs a common standard, and Microsoft and Nvidia believe the HGX-1 is an ideal candidate for that standard.

However, Nvidia needs the support of major industry players to establish HGX-1 as a new standard. OCP presents Nvidia with an opportunity to gain the support of industry leaders.

Next, we’ll look at Nvidia’s Automotive business.
