NVIDIA GPU Inference Increases Significantly
March 27, 2018


GPU TECHNOLOGY CONFERENCE — NVIDIA announced a series of new technologies and partnerships that expand its potential inference market to 30 million hyperscale servers worldwide, while dramatically lowering the cost of delivering deep learning-powered services.
Speaking at the opening keynote of GTC 2018, NVIDIA founder and CEO Jensen Huang described how GPU acceleration for deep learning inference is gaining traction, with new support for capabilities such as speech recognition, natural language processing, recommender systems, and image recognition — in datacenters and automotive applications, as well as in embedded devices like robots and drones.

NVIDIA announced a new version of its TensorRT inference software, and the integration of TensorRT into Google’s popular TensorFlow framework. NVIDIA also announced that Kaldi, the most popular framework for speech recognition, is now optimized for GPUs. NVIDIA’s close collaboration with partners such as Amazon, Facebook and Microsoft makes it easier for developers to take advantage of GPU acceleration using ONNX and WinML.

“GPU acceleration for production deep learning inference enables even the largest neural networks to be run in real time and at the lowest cost,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “With rapidly expanding support for more intelligent applications and frameworks, we can now improve the quality of deep learning and help reduce the cost for 30 million hyperscale servers.”

TensorRT, TensorFlow Integration 
NVIDIA unveiled TensorRT 4 software to accelerate deep learning inference across a broad range of applications. TensorRT offers highly accurate INT8 and FP16 network execution, which can cut datacenter costs by up to 70 percent.
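
TensorRT performs its own calibration internally, but the arithmetic behind INT8 execution is standard symmetric quantization. The pure-Python sketch below illustrates the general scheme; the max-based calibration and the function names are illustrative simplifications, not TensorRT's actual (entropy-based) calibration:

```python
# Sketch of symmetric per-tensor INT8 quantization, the general scheme
# behind INT8 inference engines. Calibration is simplified to a plain
# max over the tensor; real engines choose the dynamic range more carefully.

def quantize_int8(values):
    """Map a list of floats to INT8 codes plus a scale factor."""
    amax = max(abs(v) for v in values)      # simplistic calibration: max |x|
    scale = amax / 127.0 if amax else 1.0   # one scale for the whole tensor
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate floats from INT8 codes."""
    return [c * scale for c in codes]

activations = [0.02, -1.3, 0.75, 1.27]
codes, scale = quantize_int8(activations)
approx = dequantize_int8(codes, scale)
# Each recovered value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(activations, approx))
```

Storing and multiplying 8-bit codes instead of 32-bit floats is what lets the hardware move four times less data and use faster integer math, which is where the cost savings come from.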

TensorRT 4 can be used to rapidly optimize, validate and deploy trained neural networks in hyperscale datacenters and on embedded and automotive GPU platforms. The software delivers up to 190x faster deep learning inference compared with CPUs for common applications such as computer vision, neural machine translation, automatic speech recognition, speech synthesis and recommendation systems.

To further streamline development, NVIDIA and Google engineers have integrated TensorRT into TensorFlow 1.7, making it easier to run deep learning inference applications on GPUs.
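
As a sketch of what that integration looked like at release: in TensorFlow 1.7 the TensorRT bridge lived under `tf.contrib.tensorrt`, and a frozen graph could be converted with `create_inference_graph`. The graph file and output node name below are placeholders, and running this requires a GPU build of TensorFlow 1.7 with TensorRT installed:

```python
# Sketch of TensorFlow 1.7's TensorRT integration (tf.contrib.tensorrt).
# Requires a GPU build of TensorFlow with TensorRT; the graph file and
# output tensor name are illustrative placeholders.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

with tf.gfile.GFile("frozen_model.pb", "rb") as f:   # placeholder frozen graph
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Rewrite TensorRT-compatible subgraphs into optimized TensorRT engine nodes.
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=["logits"],                  # placeholder output node name
    max_batch_size=8,                    # largest batch the engine must serve
    max_workspace_size_bytes=1 << 30,    # scratch memory TensorRT may use
    precision_mode="FP16")               # or "FP32" / "INT8"

# trt_graph can then be imported and run like any other GraphDef.
```

The conversion leaves unsupported operations in TensorFlow and replaces only the compatible subgraphs, which is what makes the integration transparent to existing models.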

Rajat Monga, engineering director at Google, said, “The TensorFlow team is collaborating very closely with NVIDIA to bring the best performance possible on NVIDIA GPUs to the deep learning community. TensorFlow’s integration with NVIDIA TensorRT now delivers up to 8x higher inference throughput (compared to regular GPU execution within a low-latency target) on NVIDIA deep learning platforms with Volta Tensor Core technology, enabling the highest performance for GPU inference within TensorFlow.”

NVIDIA has optimized Kaldi, the world’s leading speech framework, to run faster on GPUs. GPU speech acceleration will mean more accurate and useful virtual assistants for consumers, and lower deployment costs for datacenter operators.

Broad Industry Support 
Developers at a wide spectrum of companies around the world are using TensorRT to discover new insights from data and to deploy intelligent services to businesses and consumers.

NVIDIA engineers have worked closely with Amazon, Facebook and Microsoft to ensure that developers using ONNX-compatible frameworks such as Caffe2, Chainer, CNTK, MXNet and PyTorch can now easily deploy to NVIDIA deep learning platforms.

Markus Noga, head of Machine Learning at SAP, said, “In our evaluation of TensorRT running our deep learning-based recommendation application on NVIDIA Tesla V100 GPUs, we experienced a 45x increase in inference speed and throughput compared with a CPU-based platform. We believe TensorRT could dramatically improve productivity for our enterprise customers.”

Nicolas Koumchatzky, head of Twitter Cortex, said, “Using GPUs made it possible to enable media understanding on our platform, not just by drastically reducing media deep learning models training time, but also by allowing us to derive real-time understanding of live videos at inference time.”

Microsoft also recently announced AI support for Windows 10 applications. NVIDIA partnered with Microsoft to build GPU-accelerated tools to help developers incorporate more intelligent features in Windows applications.

NVIDIA also announced GPU acceleration for Kubernetes to facilitate enterprise inference deployment on multi-cloud GPU clusters. NVIDIA is contributing GPU enhancements to the open-source community to support the Kubernetes ecosystem.
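
Concretely, GPU scheduling in Kubernetes works through the device-plugin resource `nvidia.com/gpu`: a pod requests GPUs the same way it requests CPU or memory. A minimal sketch, in which the pod name and container image are illustrative placeholders:

```yaml
# Minimal sketch of a Kubernetes pod requesting one NVIDIA GPU via the
# device-plugin resource. The container image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: trt-inference
spec:
  containers:
  - name: inference
    image: example.com/trt-inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1    # schedule onto a node exposing an NVIDIA GPU
```

The scheduler places the pod only on nodes that advertise the GPU resource, which is what enables inference workloads to scale across multi-cloud GPU clusters.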

In addition, MathWorks, makers of MATLAB software, today announced TensorRT integration with MATLAB. Engineers and scientists can now automatically generate high-performance inference engines from MATLAB for the NVIDIA DRIVE, Jetson, and Tesla platforms.

Inference for the Datacenter 
Datacenter managers constantly balance performance and efficiency to keep their server fleets at maximum productivity. NVIDIA Tesla GPU-accelerated servers can replace several racks of CPU servers for deep learning inference applications and services, freeing up precious rack space and reducing energy and cooling requirements.

Inference for Self-Driving Cars, Embedded 
TensorRT can also be deployed on NVIDIA DRIVE autonomous vehicles and NVIDIA Jetson embedded platforms. Deep neural networks built on any framework can be trained on NVIDIA DGX systems in the datacenter, and then deployed into all types of devices, from robots to autonomous vehicles, for real-time inference at the edge.
With TensorRT, developers can focus on developing novel deep learning-powered applications rather than on performance tuning for inference deployment. Developers can use TensorRT to deliver lightning-fast inference with INT8 or FP16 precision, significantly reducing latency, which is vital for capabilities such as object detection and path planning on embedded and automotive platforms.