[Figure: NVIDIA GPU performance against Google’s TPU]

A Look at the Performance of NVIDIA’s Pascal-Based Tesla GPUs


NVIDIA’s AI milestones

NVIDIA (NVDA) has been supporting the adoption of AI (artificial intelligence) beyond the cloud and into supercomputing and other industries.

The company’s Tesla GPU (graphics processing unit) has achieved computation milestones, showcasing its ability to process large amounts of data in less time while using less power and space.


NVIDIA challenges Google’s report on TPU performance

Recently, Google (GOOG) unveiled details of its custom ASIC (application-specific integrated circuit), the TPU (tensor processing unit). In a paper, Google tested the TPU’s performance against Intel’s (INTC) Haswell microprocessor and NVIDIA’s K80 GPU, with the following results:

  • In terms of the number of operations performed per second, the TPU was 14.5x faster than Intel’s chip and 13.2x faster than NVIDIA’s chip.
  • In terms of performance per watt, the TPU was 17–34x better than Intel’s chip and 25–29x better than NVIDIA’s chip (the sketch after this list shows how such ratios are derived).
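
For readers who want to see how such multiples are derived, here is a minimal Python sketch. The chip labels, throughput figures (operations per second), and wattages in it are hypothetical placeholders, not the numbers from Google’s paper; it only illustrates that “X times faster” is a ratio of throughputs, while “X times better performance per watt” is a ratio of throughput divided by power.

# Illustrative sketch only: these throughput and power figures are hypothetical
# placeholders, not the values reported in Google's TPU paper. The point is to
# show how "X times faster" and "performance per watt" ratios are computed.

def perf_per_watt(ops_per_second, watts):
    """Operations delivered per second for each watt of power consumed."""
    return ops_per_second / watts

# Hypothetical chips: peak operations per second and board power in watts.
chips = {
    "TPU": {"ops_per_s": 90e12, "watts": 75},
    "GPU": {"ops_per_s": 7e12, "watts": 300},
    "CPU": {"ops_per_s": 2e12, "watts": 145},
}

tpu = chips["TPU"]
for name in ("GPU", "CPU"):
    other = chips[name]
    speedup = tpu["ops_per_s"] / other["ops_per_s"]
    efficiency_gain = (perf_per_watt(tpu["ops_per_s"], tpu["watts"])
                       / perf_per_watt(other["ops_per_s"], other["watts"]))
    print(f"TPU vs {name}: {speedup:.1f}x faster, "
          f"{efficiency_gain:.1f}x better performance per watt")

Running the sketch prints ratios in the same form as the bullet points above, so the speed and efficiency claims can be read on the same footing.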

In response, NVIDIA’s CEO, Jen-Hsun Huang, issued a blog post in which he stated that Google was comparing its TPU with NVIDIA’s older K80 GPU and not with its current-generation P40 GPU. NVIDIA claims that the P40 offers 26x more inference performance than the K80.

NVIDIA’s own comparison also shows that Google’s TPU is more power-efficient than NVIDIA’s GPUs. That’s because the TPU is a custom-built ASIC designed to handle a specific workload, whereas NVIDIA’s GPUs are designed to handle a much wider range of workloads.

NVIDIA’s GPUs are used by most Fortune 500 companies that lack in-house expertise in AI software, whereas Google’s TPUs are used by companies that have this expertise. The general-purpose nature of NVIDIA’s GPUs is highlighted by the milestones they’ve achieved across different industries.

NVIDIA’s Tesla GPUs achieve new computational milestone in oil and gas

In February 2017, ExxonMobil (XOM) stated that it had achieved the computational milestone of simulating a one-billion-cell reservoir model across 717,000 processors. The company used the National Center for Supercomputing Applications’ Blue Waters supercomputer, which has 22,000 servers with 32 processors in each server.

A few months later, Stone Ridge Technology achieved comparable computational performance using one-tenth the power and one-hundredth the space. The company simulated one billion reservoir cells in 92 minutes on its Echelon petroleum reservoir simulation software. The simulation ran on 60 of IBM’s (IBM) Power8 CPUs (central processing units) and 120 of NVIDIA’s Tesla P100 GPUs. The IBM servers were linked to the GPUs using NVIDIA’s NVLink interconnect, which further improved the speed.

IBM is using NVIDIA’s GPUs to sell its Power8 processors into HPC (high-performance computing) applications and to gain some share in the server processor market.

After having achieved milestones in the data center space, NVIDIA is now looking to bring AI to the edge. We’ll take a look at this strategy in the next article.
