
Intel's 2017 Product Roadmap: A Journey towards Data (Part 6 of 11)

Where Does Intel Stand in the Artificial Intelligence Market?

Intel in the artificial intelligence market

According to Jackdaw Research analyst Jan Dawson, Intel (INTC) is committed to making it big in the AI (artificial intelligence) market but has arrived a little late. It also lacks the standard-setting power in AI that it enjoys in the PC (personal computer) and server markets. However, the company is speeding up its AI efforts and has several new products lined up for release in 2017 and 2018.

Its AI portfolio comprises the standard Xeon, the higher-performance Xeon Phi, Xeon coupled with Lake Crest, and Xeon coupled with the Arria 10 FPGA (field-programmable gate array).



Intel’s AI portfolio on the Nervana platform

Intel is developing its AI portfolio on the Nervana platform. In 2H17, it will debut Knights Crest, which pairs a Skylake Xeon with an FPGA. The company will also debut Lake Crest, its first discrete accelerator optimized for deep learning, which delivers high compute density over a high-bandwidth interconnect. Both chips will be built on the Flexpoint architecture, which is expected to improve parallelism tenfold.
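Flexpoint has been described as a block floating-point style numeric format: the values in a tensor share a single exponent and are stored as fixed-point integer mantissas, so most of the arithmetic can run on dense integer hardware. The Python sketch below illustrates only that general idea; the function names, the 16-bit mantissa width, and the rule for picking the shared exponent are illustrative assumptions, not Intel's actual implementation.

```python
import numpy as np

def flexpoint_encode(x, mantissa_bits=16):
    """Encode a tensor as integer mantissas that share one exponent.

    Illustrative sketch only: real Flexpoint hardware manages the shared
    exponent adaptively during training; here it is simply chosen so the
    largest-magnitude value fits in the signed mantissa range.
    """
    max_mag = float(np.max(np.abs(x)))
    if max_mag == 0.0:
        exponent = 0
    else:
        # Pick an exponent so that max_mag / 2**exponent < 2**(mantissa_bits - 1).
        exponent = int(np.ceil(np.log2(max_mag))) - (mantissa_bits - 1)
    scale = 2.0 ** exponent
    limit = 2 ** (mantissa_bits - 1)
    mantissas = np.clip(np.round(x / scale), -limit, limit - 1).astype(np.int32)
    return mantissas, exponent

def flexpoint_decode(mantissas, exponent):
    """Reconstruct approximate floating-point values from the shared exponent."""
    return mantissas.astype(np.float64) * (2.0 ** exponent)

# Round-trip a small random weight tensor and check the quantization error.
weights = np.random.randn(4, 4).astype(np.float32)
m, e = flexpoint_encode(weights)
print("max abs error:", np.max(np.abs(weights - flexpoint_decode(m, e))))
```

Because every value in the tensor reuses the same exponent, multiply-accumulate operations reduce to integer arithmetic plus a single scale factor, which is the property that makes such formats attractive for dense deep-learning hardware.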

The company will introduce Knights Mill, its next-generation Xeon Phi optimized for AI, in 2H18. Intel aims to reduce AI training time by a factor of 100 by 2020.

A threat to Intel’s data center chips

Google (GOOG) recently released its custom ASIC (application-specific integrated circuit), the TPU (tensor processing unit), which can perform computationally intensive tasks such as voice search and image processing 15 to 30 times faster than Intel’s and NVIDIA’s (NVDA) chips.

In the short term, TPUs could reduce the number of server chips Google requires in its server farms, thereby affecting Intel’s sales to cloud companies. As AI goes mainstream, Google could also begin selling its TPUs and compete directly with Intel and NVIDIA.

Arria 10 FPGA

Google uses a custom ASIC, which can deliver better performance than CPUs (central processing units) and GPUs (graphics processing units) but takes two years to build. However, in an interview with PC Magazine, Intel PSG (Programmable Solutions Group) general manager Dan McNamara stated that Intel is using FPGAs because they can be reprogrammed by developers, deliver performance close to that of ASICs, have low latency, and are highly parallel.

Intel’s Xeon-FPGA package integrates a Broadwell Xeon processor with the Arria 10 FPGA, which is built on TSMC’s (TSM) 20-nm (nanometer) node. The combination can accelerate web-scale searches by up to ten times.

Stratix 10

Intel has launched Stratix 10 using its new EMIB (embedded multi-die interconnect bridge) technology, which lets Intel mix and match dies built on different nodes. In Stratix 10, the core die is built on Intel’s 14-nm process, while the transceivers are built on TSMC’s 16-nm process. EMIB will allow the company to combine high-performance 10-nm and 14-nm CPUs and GPUs with low-power chips built on a different node to achieve extreme optimization.
