Intel’s Response to NVIDIA’s Tesla Graphics Processing Unit
Previously in this series, we saw that Intel (INTC) is experimenting with several types of AI (artificial intelligence) platforms that can compete with NVIDIA’s (NVDA) GPUs (graphics processing units). Intel recently announced an AI chip for data centers, developed by Nervana Systems, which it acquired in 2016.
In October 2017, Intel announced its first-generation processor for neural networks, the Nervana NNP (Neural Network Processor), formerly codenamed Lake Crest. This processor, which it developed in conjunction with Facebook (FB), is Intel’s response to NVIDIA’s Tesla GPU.
At the time of its acquisition, Nervana had developed neon, an open-source, deep-learning framework, and Nervana Cloud, which was optimized to run on NVIDIA’s Titan X GPUs. It was in the process of developing a custom ASIC (application-specific integrated circuit) called the Nervana Engine, which it claimed would deliver ten times better performance than NVIDIA’s Maxwell GPUs.
The Intel Nervana NNP is based on the Nervana Engine. Even if the ASIC delivers on that claim, it would still trail NVIDIA, which has already moved two generations beyond Maxwell with its Volta GPUs. The Nervana NNP and NVIDIA’s Tesla V100 do share some data-link and memory similarities: like the Tesla V100, which supports six NVLinks, the Nervana NNP has six bidirectional data links, and both chips use HBM (high-bandwidth memory).
Intel claims that the NNP could advance AI applications across several industries, including healthcare, automotive, and weather forecasting. In a blog post, Intel chief executive officer Brian Krzanich stated that the NNP’s scalability, numerical parallelism, and bidirectional data transfer would maximize the amount of data processed and deliver greater insights to customers.
The Nervana NNP rollout
Intel plans to ship the first batch of NNPs to its partners by the end of 2017. Customers can access NNPs in two ways: they can either buy NNPs for their own data centers or rent access to NNPs via Intel’s cloud data centers, similar to the service provided by NVIDIA’s GPU Cloud. According to Fortune, Naveen Rao, general manager of Intel’s AI Products Group, has stated that the Nervana NNP is a “starting point,” and that the product could develop into an ecosystem with plug-and-play modules.
While Nervana technology is being used for data center AI, Movidius technology is being used to bring AI to the edge. We’ll look at Movidius technology in the next part.