Intel's 2017 Product Roadmap: A Journey towards Data (Part 7 of 11)

Behind Intel’s Plans to Make Artificial Intelligence Mainstream

Intel aims to make artificial intelligence mainstream

In the previous part of this series, we saw that Intel (INTC) is developing AI (artificial intelligence) hardware using Nervana and other deep-learning platforms. In the GPU[1]-dominated AI market, Intel is looking to bring its FPGA[2]-backed Xeon CPUs[3] to mainstream users by engaging with open-source frameworks such as Caffe2 and Chainer.

Intel is working to run these open-source frameworks efficiently on its hardware, and it is using them to show developers that AI workloads can run easily on its Xeon CPUs.

Caffe2

Intel is participating in Facebook's (FB) new open-source, cross-platform deep-learning framework, Caffe2, which aims to optimize deep learning for cloud and mobile environments. Caffe2 is a production-ready, lightweight, high-performance, and scalable framework with a focus on portability.

In its blog, Intel stated that its MKL (Math Kernel Library) functions would boost Caffe2's inference performance on CPUs. Intel has also optimized Caffe2 for its Skylake Xeon processors. The Skylake architecture includes a wider 512-bit AVX[4] engine (AVX-512), a significant performance improvement over the previous Haswell/Broadwell architecture's 256-bit-wide AVX2.
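
For illustration, here is a minimal sketch of how a developer might request an MKL-backed operator through Caffe2's Python API. The engine="MKLDNN" string and the op's availability are assumptions that depend on how Caffe2 was built against Intel's libraries; this is not official Intel or Facebook sample code.

    import numpy as np
    from caffe2.python import core, workspace

    # Feed a small input, filter, and bias into the default workspace.
    workspace.FeedBlob("X", np.random.rand(1, 3, 32, 32).astype(np.float32))
    workspace.FeedBlob("W", np.random.rand(8, 3, 3, 3).astype(np.float32))
    workspace.FeedBlob("b", np.zeros(8, dtype=np.float32))

    # Build a convolution and request the MKL-DNN engine (assumption:
    # requires a Caffe2 build compiled with MKL-DNN support).
    conv = core.CreateOperator(
        "Conv", ["X", "W", "b"], ["Y"],
        kernel=3, stride=1, pad=1,
        engine="MKLDNN",
    )
    workspace.RunOperatorOnce(conv)
    print(workspace.FetchBlob("Y").shape)  # (1, 8, 32, 32)

If the requested engine isn't compiled into the build, Caffe2 typically falls back to its default CPU implementation, so the same network definition can run either way.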

Chainer

Intel has partnered with Japan's (EWJ) Preferred Networks, whose open-source framework, Chainer, has mainly used NVIDIA's (NVDA) GPUs for its AI workloads. Under the deal, Chainer will support Intel's Xeon processors alongside NVIDIA's CUDA platform.
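
From a developer's perspective, Xeon support in Chainer looks roughly like the sketch below. It assumes Chainer v4 or later with the optional iDeep (MKL-DNN) backend installed; the to_intel64() and use_ideep names come from that backend and may vary by version.

    import numpy as np
    import chainer
    import chainer.links as L

    model = L.Linear(100, 10)  # a toy one-layer model
    model.to_intel64()         # move parameters to the iDeep/MKL-DNN backend
                               # (assumption: ideep4py is installed)

    x = np.random.rand(4, 100).astype(np.float32)
    with chainer.using_config("use_ideep", "auto"):
        y = model(x)           # forward pass runs through Intel-optimized kernels
    print(y.shape)             # (4, 10)

The same model can instead be moved to NVIDIA hardware with model.to_gpu(), which is the point of the deal: one framework definition, two hardware back ends.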

Intel could engage with more such frameworks to gain market share and become the preferred architecture for AI among developers. In an interview with El Reg, Intel Accelerator Workload Group general manager Barry Davis stated that most AI workloads would come from big cloud companies such as Google, Amazon, and Microsoft. When these companies launch AI-as-a-service, Intel wants to stand alongside NVIDIA as a preferred platform.

OpenStack

While Intel is increasing its engagement in open-source frameworks for AI, the company has reduced its funding support for the OSIC (OpenStack Innovation Center). OpenStack provides companies with software tools to build cloud infrastructure, and OSIC aims to encourage more enterprises to adopt OpenStack.

Several vendors, including Hewlett Packard Enterprise (HPE) and Cisco Systems (CSCO), have pulled back their OpenStack projects in the last six months. However, the adoption of OpenStack private clouds is growing at a CAGR (compound annual growth rate) of 39%, and this market is expected to reach $5.7 billion by 2020, according to 451 Research. In addition to accelerators, processors, and software, Intel is developing memory and storage that support AI, which we'll discuss in the next part.
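
As a quick sanity check on the 451 Research figures above: at a 39% CAGR, a market reaching $5.7 billion by 2020 implies a base of roughly $1.5 billion four years earlier. The four-year window from 2016 is our assumption; the report's exact base year isn't stated here.

    # Back-of-the-envelope check of the 451 Research figures cited above.
    # Assumption: a 2016 base year and four years of 39% compound growth.
    cagr, target_2020, years = 0.39, 5.7e9, 4
    implied_2016 = target_2020 / (1 + cagr) ** years
    print(f"Implied 2016 market size: ${implied_2016 / 1e9:.2f}B")  # ~$1.53B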

  [1] graphics processing unit
  [2] field-programmable gate array
  [3] central processing units
  [4] Advanced Vector Extensions