GTC: Deep Learning Tools
March 18, 2015

In front of a packed audience at the Nvidia GPU Technology Conference, CEO and co-founder Jen-Hsun Huang introduced three new technologies that he said will fuel deep learning. One was the GeForce GTX Titan X, which he called the most powerful processor ever built for training deep neural networks.

This advanced GPU features 3,072 CUDA cores – 50 percent more than the GeForce GTX 980's 2,048. It also sports 12GB of onboard memory and carries a price tag of $999.

Analyst Jon Peddie of Jon Peddie Research called it “a gaming card.” Show-goers at the recent Game Developers Conference got a sneak peek at the GPU as it drove a Hobbit-related VR experience.

The Titan X is built on Nvidia's Maxwell GPU architecture, delivering twice the performance and double the power efficiency of its Kepler-based predecessor. Single-precision floating-point performance reaches 7 teraflops, while double-precision performance reaches 0.2 teraflops (200 gigaflops).
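
Those figures line up with the usual back-of-the-envelope estimate of peak throughput: cores × clock × 2, since each CUDA core can issue one fused multiply-add (two floating-point operations) per cycle. A minimal sketch in Python, assuming a roughly 1.1 GHz boost clock and Maxwell's 1/32 double-precision rate – both assumptions, not figures from the keynote:

```python
# Back-of-the-envelope peak-throughput estimate for a GPU.
# Assumptions (not from the keynote): ~1.1 GHz boost clock and
# Maxwell's 1/32 double-precision rate.

CUDA_CORES = 3072             # Titan X core count from the announcement
BOOST_CLOCK_GHZ = 1.1         # assumed boost clock
FLOPS_PER_CORE_PER_CYCLE = 2  # one fused multiply-add counts as 2 FLOPs
FP64_RATIO = 1 / 32           # assumed Maxwell FP64:FP32 rate

fp32_tflops = CUDA_CORES * BOOST_CLOCK_GHZ * FLOPS_PER_CORE_PER_CYCLE / 1000
fp64_tflops = fp32_tflops * FP64_RATIO

print(f"FP32 peak: ~{fp32_tflops:.1f} TFLOPS")   # ~6.8, close to the quoted 7
print(f"FP64 peak: ~{fp64_tflops:.2f} TFLOPS")   # ~0.21, close to the quoted 0.2
```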

While the Titan X is the company's new flagship GeForce gaming GPU, Huang contended it is also uniquely suited for deep learning, a theme the CEO continued to emphasize during the conference keynote.

Emphasizing the power of the new GPU, Huang pointed to neural network training benchmarks. Titan X trained AlexNet, an industry-standard model, on the 1.2 million-image ImageNet dataset in less than three days; a 16-core CPU took 40 days to do the same.

In other news related to deep learning research, Nvidia revealed the DIGITS Deep Learning GPU Training System, a software application that makes it easier for data scientists and researchers to quickly create high-quality deep neural networks. 

What is deep learning? A rapidly growing segment of artificial intelligence, it is driving computing innovation in areas ranging from advanced medical and pharmaceutical research to fully autonomous, self-driving cars.

Using deep neural networks to train computers to teach themselves how to classify and recognize objects can be an onerous, time-consuming task. The DIGITS Deep Learning GPU Training System software gives users what they need from start to finish. It’s the first all-in-one graphical system for designing, training, and validating deep neural networks for image classification. It steps users through the setup process and helps them configure and train deep neural networks, so they can focus on the research and results.

The user interface makes it easy to prepare and load training datasets from a local system or from the Web.
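
For datasets on a local system, the common convention that graphical trainers like DIGITS can ingest is one folder of images per class. A minimal sketch of arranging files that way; the paths and class names below are illustrative placeholders, not part of the announcement:

```python
# Arrange an image-classification dataset as one folder per class,
# the conventional layout that tools like DIGITS can ingest directly.
# All paths and class names here are illustrative placeholders.
import shutil
from pathlib import Path

SOURCE_DIR = Path("raw_images")      # hypothetical: files named like "cat_0001.jpg"
DATASET_DIR = Path("dataset/train")  # hypothetical destination
CLASSES = ["cat", "dog", "bird"]     # hypothetical label set

for class_name in CLASSES:
    class_dir = DATASET_DIR / class_name
    class_dir.mkdir(parents=True, exist_ok=True)
    # Copy every image whose filename starts with the class label.
    for image_path in SOURCE_DIR.glob(f"{class_name}_*.jpg"):
        shutil.copy2(image_path, class_dir / image_path.name)
```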

Lastly, Huang showed off the DIGITS DevBox, introduced as the fastest deskside deep learning appliance. Powered by four Titan X GPUs, the box is built to handle deep learning research. According to Nvidia, every component of the DevBox – from memory to I/O to power – has been optimized to deliver highly efficient performance. It comes pre-installed with all the software data scientists and researchers require to develop their own deep neural networks. This includes the DIGITS software package, the most popular deep learning frameworks – Caffe, Theano, and Torch – and cuDNN 2.0, Nvidia's robust GPU-accelerated deep learning library.
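
As a quick sanity check of such a pre-installed stack, Caffe's Python bindings can be switched to GPU mode and pointed at each card in turn. A minimal sketch, assuming the pycaffe bindings are importable, as they would be on a pre-configured machine:

```python
# Smoke-test sketch for a multi-GPU Caffe installation such as the
# DevBox's. Assumes the pycaffe bindings are on the Python path; the
# GPU count is taken from the announcement, not queried from hardware.
import caffe

NUM_GPUS = 4  # the DevBox ships with four Titan X cards

caffe.set_mode_gpu()
for device_id in range(NUM_GPUS):
    caffe.set_device(device_id)  # select GPU 0..3
    print(f"GPU {device_id}: Caffe device selected OK")
```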

A hot offer, the box is energy-efficient, quiet, and cool-running. It fits under a desk and plugs into an ordinary wall socket. The selling price is $15,000.