In the previous part of this series, we saw that Nvidia (NVDA) is working to increase the adoption of artificial intelligence (AI) in enterprises. According to market intelligence firm Tractica, AI will become part of every industry, replacing existing business models with new ones that rely on deep learning.
Tractica forecasts that global AI revenue could grow from $643.7 million in 2016 to $36.8 billion by 2025.
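To put that forecast's pace in perspective, we can back out the implied compound annual growth rate from the two endpoint figures. This is our own back-of-the-envelope sketch, not a Tractica calculation, and the helper function name is ours:

```python
# Implied compound annual growth rate (CAGR) of the Tractica forecast.
# Endpoints from the article: $643.7M in 2016 -> $36.8B by 2025 (9 years).

def cagr(start: float, end: float, years: int) -> float:
    """Return the compound annual growth rate as a decimal fraction."""
    return (end / start) ** (1 / years) - 1

rate = cagr(643.7e6, 36.8e9, 2025 - 2016)
print(f"Implied CAGR: {rate:.1%}")  # roughly 57% per year
```

A roughly 57% annual growth rate underlines why chip makers are racing to field AI accelerators.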
The increasing adoption of AI could see other companies such as Intel (INTC) and Advanced Micro Devices (AMD) create their own accelerators. However, they would have to invest heavily to build an ecosystem rivaling the one Nvidia has developed over the years.
Intel aims to deliver chips by 2020 with 100x better AI performance than that offered by GPUs. It would most likely rely on FPGA (field-programmable gate array) accelerators, which can deliver far better performance and energy efficiency than GPUs in certain applications.
Intel is currently developing its deep learning chips, Lake Crest and Knights Crest, in collaboration with Google (GOOG). The chip maker would use technology from its recently acquired AI startup Nervana Systems to manufacture these chips and optimize them for Google’s TensorFlow AI software. Intel plans to test the Lake Crest chip in fiscal 1H17 and make it available to customers by the end of fiscal 2017.
Nvidia’s president, Jen-Hsun Huang, does not count Intel as a competitor, citing the difference between FPGA and GPU technologies. He explained that FPGAs can deliver 10x better performance than a GPU, but they are meant for custom applications rather than general-purpose use. Nvidia, on the other hand, aims to make its GPU a general-purpose parallel processor that can handle all types of workloads.
While Intel is betting on FPGA technology, AMD is looking to compete with Nvidia using GPU accelerators. AMD launched a FirePro GPU for high-performance computing, but the product was not well-received because AMD’s GPU software was designed for Microsoft’s (MSFT) Windows, while server applications requiring data acceleration ran on Linux and other operating systems.
Returning to the AI space, AMD has launched its Radeon Open Compute Platform (ROCm), which eliminates the need for developers to port their code to OpenCL to run on AMD GPUs. Although AMD now has the required software in place for data center applications, its hardware still lags behind Nvidia’s.
Mizuho Securities analyst Vijay Rakesh stated that the fast-growing AI market would give sufficient business to all three GPU suppliers: Intel, Nvidia, and AMD.
Next, we’ll look at another upcoming segment for Nvidia: automotive.