Intel’s Data Center Group
Intel (INTC) has reduced its spending in the personal computer space and is making efforts in the networking space to boost revenue for its CCG (Client Computing Group). In its DCG (Data Center Group), the company is reducing its efforts in enterprise servers to focus on the cloud, hyperscale computing, communications, and AI (artificial intelligence).
In fiscal 2016, DCG revenue rose 7.8% YoY (year-over-year) to $17.0 billion, with adjacency, cloud, and communications provider revenue rising 21.0%, 24.0%, and 19.0%, respectively. Although revenue is increasing, DCG’s profit is decreasing. The growth segments—Ethernet controllers, Omni-Path fabric controllers and switches, and network ASICs (application-specific integrated circuits)—require huge investments.
Enterprise products’ contribution to DCG revenue has fallen below 50%, whereas the cloud’s contribution has risen. Future DCG growth could be driven by networking and storage, where Intel’s market share is very low.
Intel’s AI strategy
Intel, now focusing on AI, has created an Artificial Intelligence Products Group, which brings together resources to build a strong AI portfolio. The group will create an applied AI research lab to develop future generations of AI products ranging from data centers to network edge devices. The group will be headed by former Nervana CEO Naveen Rao.
The Register reported that the AI group’s new chief technology officer, Amir Khosrowshahi, describes AI “as involving deep statistical analysis of very closely-observed events so that we can infer likely outcomes with satisfying precision.” Working along these lines, Intel acquired AI startups Nervana and Movidius and FPGA (field-programmable gate array) maker Altera.
Though Intel’s modern Xeon chip can analyze large amounts of data, dedicated hardware with a matching software ecosystem can accelerate this process. Intel’s Xeon chip, combined with an Altera FPGA, offers just that. Altera’s FPGAs are a year behind Xilinx’s (XLNX) FPGAs, but that doesn’t bother Intel, as its main aim is to integrate FPGAs into its server chips to deliver AI capabilities.
Altera is developing libraries for common AI tasks, sparing developers from writing code for popular tasks from scratch. It is also developing tools to make it easier for developers to write code for FPGAs. Meanwhile, Nervana has provided Intel with deep learning chips.
While NVIDIA (NVDA) and Advanced Micro Devices (AMD) are using GPU accelerators to tap into AI, Intel is using FPGA accelerators, which it believes can deliver better performance. FPGAs can also be used across a variety of chips, ranging from high-performance server chips to low-power IoT (Internet of Things) chips.
Intel is integrating the aforementioned intellectual property to create a comprehensive AI portfolio. We’ll look at this portfolio in the next part of the series.