NVIDIA’s AI partnerships
NVIDIA (NVDA) is looking to bring its GPUs (graphics processing units) to every data center in the world. To that end, it's encouraging businesses and developers to adopt AI (artificial intelligence) and has entered into several partnerships.
One of the most important partnerships is with Baidu (BIDU), which will use NVIDIA's Tesla GPUs for training and inference. Baidu will also optimize its PaddlePaddle open source deep learning framework for Volta. NVIDIA has also partnered with automaker Volkswagen to bring AI across its organization.
Under its HGX Partner Program, NVIDIA has partnered with original design manufacturers Foxconn, Inventec, Quanta, and Wistron to provide early access to its HGX reference architecture, its GPU computing technologies, and design guidelines. This should reduce the time these manufacturers need to start GPU server production after winning a contract.
NVIDIA and Walmart in retail
NVIDIA is also expanding AI applications into several industries, including manufacturing, healthcare, financial services, and retail. Retailer Wal-Mart Stores (WMT) is adopting NVIDIA GPUs as it looks to build out its cloud network and compete with Amazon.com's (AMZN) Amazon Web Services.
Amazon Web Services aggregates data in one place, which helps Amazon's retail business make its processes more cost-efficient. Amazon passes these cost efficiencies on to merchants via cheap back-end IT (information technology) services.
NVIDIA and GE’s Avitas Systems in industrial
Drones and other unmanned robots currently conduct industrial inspections, collecting images and data through cameras and sensors. Avitas Systems, a General Electric (GE) Ventures company, plans to use NVIDIA's DGX-1 supercomputer to build a deep neural network that scans the data collected by drones and detects even the most minute changes or inconsistencies in industrial operations. These GPUs would quickly and consistently identify defects in industrial equipment before they become dangerous.
NVIDIA GPU Cloud
NVIDIA has also announced its GPU Cloud platform, which aims to provide developers with several software tools to train their own AI systems. The company aims to encourage developers to experiment with AI, though this would put NVIDIA in competition with its cloud customers that offer GPU cloud computing.
Competition from ASIC and TPU
Many analysts are concerned about the growing competition NVIDIA faces from ASICs (application-specific integrated circuits) and TPUs (tensor processing units). On the fiscal 2Q18 earnings call, Jen-Hsun Huang stated that TPUs are ASICs and that NVIDIA's GPUs can also serve as tensor processors, while performing far more functions than dedicated TPUs.
At its analyst day, NVIDIA stated that the data center market would be a $30 billion opportunity by 2020, with inference at $15 billion, training at $11 billion, and HPC (high-performance computing) at $4 billion.