NVIDIA’s data center revenue
NVIDIA (NVDA) has been driving GPU (graphics processing unit) computing across various markets such as AI (artificial intelligence) and automotive. Gaming is NVIDIA’s present focus, but AI will likely be the company’s future.
Over the past three years, NVIDIA has grown its data center business tenfold, from $200 million to nearly $2 billion in annual revenues. In fiscal 3Q18, NVIDIA's data center revenues rose 20% sequentially and 109% YoY (year-over-year) to $500 million. This growth was driven by data centers' increasing adoption of its Tesla V100 GPUs.
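As a quick sanity check, the growth rates above imply the revenue figures for the prior quarter and the year-ago quarter, and the "nearly $2 billion" annual figure matches a simple annualization of the quarterly number. The sketch below uses only the percentages stated in this article (not NVIDIA's filings), so the implied quarterly figures are rough estimates:

```python
# Back-of-the-envelope check of the data center figures cited above.
# All revenue figures are in millions of USD; the growth rates are the
# ones quoted in the article, not figures from NVIDIA's filings.

q3_fy18 = 500            # fiscal 3Q18 data center revenue ($M)
seq_growth = 0.20        # 20% sequential (quarter-over-quarter) growth
yoy_growth = 1.09        # 109% year-over-year growth

prior_quarter = q3_fy18 / (1 + seq_growth)     # implied fiscal 2Q18 revenue
year_ago_quarter = q3_fy18 / (1 + yoy_growth)  # implied fiscal 3Q17 revenue
annual_run_rate = q3_fy18 * 4                  # simple four-quarter annualization

print(f"Implied fiscal 2Q18 revenue: ~${prior_quarter:.0f}M")
print(f"Implied fiscal 3Q17 revenue: ~${year_ago_quarter:.0f}M")
print(f"Annualized run rate: ~${annual_run_rate / 1000:.1f}B")
```

The $2 billion annualized run rate is consistent with the "nearly $2 billion in annual revenues" figure cited above.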
At the heart of this growth is its CUDA software, which makes its GPU hardware easy to use in various applications. Intel (INTC) and Advanced Micro Devices (AMD) are looking to tap the AI space, but they lack software support as strong as CUDA.
NVDA versus Intel
Intel is now offering the following five AI chips to compete with NVIDIA’s Tesla GPUs:
- the Nervana Neural Network Processor
- the Myriad X
- the Altera FPGA (field programmable gate array)
- the traditional Core processor
- the Xeon Phi processor
However, none of these chips have achieved the success of NVIDIA’s Tesla GPU. NVIDIA chief executive Jen-Hsun Huang has noted that Intel’s focus on four or five different architectures has divided its software support. It’s difficult to support so many architectures over the longer term, and Huang believes that Intel will have to scrap 80% of its architectures.
Meanwhile, NVIDIA has a seven-year lead with its GPU architecture and focuses all of its CUDA software support on this single architecture. Huang acknowledges that FPGAs and other accelerators outperform GPUs in certain tasks, but he has noted that a GPU is easier to use, which makes it the preferred choice among customers.
NVIDIA expands its AI reach
NVIDIA has a head start in the AI space, thanks to the growing adoption of its Tesla GPUs by several enterprises and CSPs (cloud service providers). The company has expanded its data center division beyond HPC (high-performance computing) and DL (deep learning) to inference, GPU-as-a-service, and domain-specific applications like finance and healthcare.
In the next part, we’ll explore these five sub-segments in greater detail.