
What Exactly Is the Xilinx-AWS Partnership Targeting?

Dec. 23, 2016, Updated 7:38 a.m. ET

What’s Xilinx shooting for with RAS?

Xilinx (XLNX) has invested heavily in developing its RAS (reconfigurable acceleration stack) in order to push adoption of its FPGAs (field-programmable gate arrays) beyond HPC (high-performance computing) to hyperscale companies such as Google (GOOG) and Microsoft (MSFT).

Xilinx is already seeing results, with its FPGAs being used for machine learning by China's hyperscale company Baidu (BIDU) and by DeePhi. Recently, Amazon (AMZN) Web Services announced that it would use Xilinx's 16 nm UltraScale+ FPGAs for its new EC2 (Elastic Compute Cloud) F1 instances.


How are FPGAs different from GPUs and ASICs?

Amazon preferred FPGAs over GPUs (graphics processing units) and ASICs (application-specific integrated circuits) because an FPGA can be programmed and reprogrammed many times to handle different types of complex workloads. By contrast, a GPU's architecture is fixed and suited mainly to the general, highly parallel workloads it was designed for, while an ASIC is hard-wired to perform a single workload.

Programming an FPGA is complex and time-consuming, so companies have been reluctant to use FPGAs, which has slowed the technology's adoption in the mainstream data center market. Amazon, notably, is looking to bridge this gap with its EC2 F1 service.

What does Amazon plan to do with Xilinx’s FPGAs?

Amazon Web Services is offering FPGA-enabled nodes on its EC2 cloud. In a blog post, Amazon Web Services Chief Evangelist Jeff Barr wrote, “We are giving you the ability to design your own logic, simulate and verify it using cloud-based tools, and then get it to market in a matter of days.”

A user can program the FPGA as many times as needed using cloud-based tools such as the FPGA Developer AMI (Amazon Machine Image) and the HDK (hardware development kit). The only cost the user incurs is for the F1 computing capacity used per hour, with no upfront payments or long-term commitments.
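For illustration, here's a minimal sketch in Python (using the boto3 AWS SDK) of how a developer might spin up an F1 instance on a pay-per-hour basis. The AMI ID, key pair name, and region below are placeholders rather than real values, and the FPGA design work itself (building an image with the HDK and loading it onto the device) happens on or from the running instance.

    import boto3

    # Launch a single F1 instance from the FPGA Developer AMI.
    # The AMI ID and key pair name are hypothetical placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder FPGA Developer AMI ID
        InstanceType="f1.2xlarge",        # smaller F1 size; f1.16xlarge carries up to eight FPGAs
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",            # placeholder SSH key pair
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}; charges accrue only for F1 capacity used")

Because billing stops when the instance is terminated, a developer can iterate on a design, reprogram the FPGA as often as needed, and pay only for the compute hours consumed.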

Initially, the service will be offered in two instance sizes, the larger of which includes up to eight FPGAs per instance. Each FPGA features 64 GB of local DDR4 ECC-protected memory, a dedicated PCIe (peripheral component interconnect express) x16 connection, roughly 2.5 million logic elements, and roughly 6,800 DSP (digital signal processing) engines. This should help bring FPGAs to the mainstream data center market.

If the EC2 F1 instance service proves to be a success, Amazon could add support for high-level tools such as OpenCL or Xilinx's RAS to target early adopters and developers of FPGAs.

Meanwhile, Microsoft has adopted rival Intel's (INTC) FPGAs in its Catapult system for computing and network acceleration. However, those FPGAs are used only for Microsoft's internal workloads.

In the next part of this series, we’ll discuss in more detail what the Amazon win means for Xilinx—and the new opportunity it provides.
