Artificial intelligence opportunities and Intel
So far, we’ve discussed how Intel (INTC) may be slowly losing ground in process technology. However, this has not deterred the company from pursuing R&D (research and development) in the AI (artificial intelligence) space. AI is widely seen as the future of technology—everything from home appliances to industrial machines could be powered by it.
MarketsandMarkets estimates that the AI chipset market will grow at a CAGR (compound annual growth rate) of 62.9% to reach $16.1 billion by 2022. Having missed the mobile revolution, Intel does not want to miss the AI revolution. Therefore, it is testing several types of AI chipsets.
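To put that growth rate in perspective, here is a rough back-of-the-envelope sketch of the market size the CAGR implies. It assumes a 2016 base year (six years of compounding to 2022), which the text does not state, so the implied base figure is illustrative only:

```python
# Implied base-year market size from the projected CAGR.
# Assumption (not stated in the text): 2016 is the forecast's base year,
# giving six years of compounding to 2022.
cagr = 0.629          # 62.9% compound annual growth rate
target_2022 = 16.1    # projected market size in billions of USD
years = 6

implied_base = target_2022 / (1 + cagr) ** years
print(f"Implied 2016 market size: ${implied_base:.2f} billion")
```

Under these assumptions, the projection implies a market of under $1 billion in 2016, which illustrates just how aggressive a 62.9% CAGR is.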
At present, NVIDIA (NVDA) leads the AI chipset race with its discrete GPUs (graphics processing units). Intel tried to develop its own discrete GPU based on x86 cores through its Larrabee project, but the project failed. Intel has since shelved the discrete GPU option and has been exploring alternatives.
Intel’s trials in AI
To accelerate its AI efforts, Intel has made several acquisitions of companies with AI-related technology. It acquired FPGA (field-programmable gate array) maker Altera in 2015 and introduced FPGAs as data-center accelerators. It also launched an add-on card, the Xeon Phi Knights Landing Coprocessor, for data-center acceleration, but discontinued it in August 2017. Xeon Phi was Intel’s attempt to develop an HPC (high-performance computing) solution using lessons learned from the Larrabee project. Intel continues to offer other processors in the Xeon Phi family.
The Knights Landing Coprocessor has been replaced by the Nervana NNP (Neural Network Processor), which Intel developed in collaboration with Facebook (FB). Intel began developing NNP chips after acquiring Nervana Systems in 2016, and it plans to ship the first chips by the end of 2017.
Intel has launched another accelerator using technology from Movidius, which it acquired in 2016. The new Movidius Myriad X is a VPU (vision processing unit) that uses the company’s SHAVE (Streaming Hybrid Architecture Vector Engine) cores for deep neural network inferencing. This chip could help devices perform tasks such as image and object recognition without connecting to a cloud server.
Overall, Intel now has five different AI platforms: FPGAs, the Xeon Phi, the Nervana NNP, the Myriad X, and its traditional Core processors. The Core processors still handle most AI workloads. It remains to be seen which platforms will be broadly adopted and which will be discontinued. Next, we’ll look at what the Nervana NNP has to offer.