Intel in the Artificial Intelligence market
According to Jackdaw Research analyst Jan Dawson, although Intel (INTC) is committed to making it big in the AI (artificial intelligence) market, it is arriving a little late. Intel also lacks the standard-setting power in AI that it enjoys in the PC (personal computer) and server markets. However, the company is accelerating its AI efforts and has several new products lined up for release in 2017 and 2018.
Its AI portfolio comprises the standard Xeon, the higher-performance Xeon Phi, Xeon coupled with Lake Crest, and Xeon coupled with the Arria 10 FPGA (field-programmable gate array).
Intel’s AI portfolio on the Nervana platform
Intel is developing its AI portfolio on the Nervana platform. In 2H17, it plans to debut a Skylake Xeon integrated with an FPGA. The company will also release Lake Crest, its first discrete accelerator optimized for deep learning, delivering high compute density with a high-bandwidth interconnect. Lake Crest is built on the Flexpoint architecture, which Intel claims improves parallelism tenfold.
The company will introduce Knights Mill, its next-generation Xeon Phi optimized for AI, in 2H18. Intel aims to reduce the time needed to train AI models by a factor of 100 by 2020.
A threat to Intel’s data center chips
Recently, Google (GOOG) unveiled its custom ASIC (application-specific integrated circuit), the TPU (tensor processing unit), which can perform computationally intensive tasks such as voice search and image processing 15 to 30 times faster than Intel's and NVIDIA's (NVDA) chips.
In the short term, TPUs could reduce the number of server chips Google needs in its data centers, thereby affecting Intel's sales to cloud companies. As AI goes mainstream, Google could also sell its TPUs externally and compete directly with Intel and NVIDIA.
Arria 10 FPGA
Google uses a custom ASIC, which can deliver better performance than CPUs (central processing units) and GPUs (graphics processing units) but takes about two years to build. In an interview with PC Magazine, Intel PSG (Programmable Solutions Group) general manager Dan McNamara stated that Intel is using FPGAs instead because they can be reprogrammed by developers, deliver performance approaching that of ASICs, offer low latency, and are highly parallel.
Intel's Arria 10 FPGA, built on TSMC's (TSM) 20-nm (nanometer) node, is paired with a Broadwell Xeon processor in a single package. The combination can accelerate web-scale searches by up to ten times.
Intel has also launched Stratix 10, which uses its new EMIB (embedded multi-die interconnect bridge) technology. EMIB lets Intel mix and match dies built on different process nodes: in Stratix 10, the core die is built on Intel's 14-nm process while the transceivers are built on TSMC's 16-nm process. EMIB will allow the company to combine high-performance 10-nm or 14-nm CPUs and GPUs with low-power chips from other nodes to achieve extreme optimization.