Technology Feast: Nvidia’s GPU Technology Conference Serves Up Appetizing Possibilities
October 5, 2009


By Barbara Robertson 

Have you ever found a small cafe, almost by chance, where you experienced a remarkable and surprising meal? And, not only was the food wonderful, but while there, you discovered that someone you had really wanted to talk to but hadn’t had a chance was sitting with a small group at the next table? You pulled your tables together. You soon realized that everyone shared a common interest but had arrived at it from different angles. The excitement grew as you exchanged stories and ideas and made plans. New people entered the cafe and joined the discussion, moving from table to table, talking and eating. The host brought more food, each course better than the last, but deftly stepped back to let everyone enjoy the feast without interference. After you left, you thought, “I can’t wait to tell everyone I know about this experience.”
It felt like that in the Fairmont Hotel in San Jose last week, where 1500 brilliant people--CEOs, CTOs, researchers, scientists, programmers, inventors, PhD candidates, entrepreneurs, and financiers--from universities, established companies, visual effects studios, animation studios, software vendors, venture capital firms, and approximately 60 start-ups attended sessions, shared ideas, announced products, and learned from each other at Nvidia’s GPU Technology Conference (GTC), September 30 to October 2. Attendees came from 600 companies and 150 universities and research institutions in 40 countries. Ninety percent of the attendees were engineers, scientists, or technical leaders, according to Dan Vivoli, senior vice president at Nvidia, who noted that the company had to close registration two weeks before the conference, having already accepted several hundred more people than it had planned.

Augmented reality from Turin’s SEAC02 at the GPU conference.

The first thing you saw when you walked into the conference was the posters lining the hallways, describing GPU solutions such as “Optimized CUDA Implementation of a Navier-Stokes Based Flow Solver for the 2D Lid Driven Cavity”--which immediately sent geek-o-meter dials into the red zone.

Lucasfilm CTO Richard Kerris speaks at the keynote on day three of the conference.

The second was the buzz from people milling in the hallways before and after sessions, and gathering in the ballroom. Booths from start-up companies ringed the outside of the ballroom. In the middle, Nvidia set tables for box lunches and catered dinners. The buzz was palpable.

“It makes your propeller beanie spin,” said Andy Hendrickson, chief technology officer at Walt Disney Animation Studios, of the conference. The phrase most often heard? “This is really exciting.”

Nvidia CEO and founder Jen-Hsun Huang opened the conference with two big announcements: the Fermi chip and a developer’s “ecosystem,” NEXUS, the combination providing jet fuel for every propeller head in the audience. Three years in the making, Fermi has three billion transistors, 512 CUDA cores (more than twice as many as the current-generation chip), eight times the peak double-precision compute performance, IEEE 754-2008 floating-point compliance, ECC memory, and support for Fortran, C++, C, OpenCL, and DirectCompute. With current-generation GPUs already demonstrating 20-, 40-, even 100-times speed-ups, you could almost see the attendees’ mental wheels turning as they pondered the implications. “I believe this is the beginning of the GPU computing revolution,” Huang said, “and it’s going to be the most important processor of this decade.”

Nvidia plans to release Fermi later this year. CG professionals might first feel the power in digital video pipelines, with Fermi feeding HD-resolution images at blazing speed. Recently, at IBC, Nvidia introduced its Quadro Digital Video Pipeline, which, using four GPUs, can process live feeds from four simultaneous HD cameras. At the GPU conference, Nvidia demonstrated HP’s tiny new $399 Mini 311 netbook, which can stream HD video and play 3D games thanks to its Ion chip. Fermi promises to be three times faster.

Nvidia organized the conference into tracks for developers, researchers, and general sessions, with a separate track for the 60 start-up companies to strut their stuff in front of other attendees, analysts, and venture capitalists. The company provided less than a quarter of the content; more than 75 percent came from the community.

In addition to Nvidia’s Jen-Hsun Huang, Harvard University’s Hanspeter Pfister, a computer science visionary and one of the world’s leading computer graphics researchers, and Richard Kerris, CTO at Lucasfilm, gave keynote addresses. Pfister talked about how GPUs help scientists map the human brain, human vision, and the beginning of the universe.

Kerris showed, through film clips and breakdowns, how dramatically the GPU has accelerated visual effects work, particularly simulations. Two bits of news stand out from Kerris’ keynote: First, ILM is now building a GPU-based renderfarm, and second, the studio sent more than 50 people to the conference.

Kerris brought Christopher Horvath onto the stage during the keynote to show the rigid-body sim and fire-sim systems he helped design to break apart the pyramid in Transformers and control a firestorm for Harry Potter. “This would take 13 hours per frame on a machine with eight processors,” Horvath said of the fire simulation. “But with the GPU, it took 10 seconds, and artists could interact with it.”

Horvath then turned the audience into a live demo of parallel processing and fluid simulation. First, he stated the rule: when someone next to you stands up, you stand; when they sit, you sit. Then he asked a column of people at the far side of the room to stand. That caused a ripple of people standing up that spread across the room. When he asked the column of people to sit, the wave changed shape. Next, he created a radial fluid simulation--like a pebble dropping into a pool--by having all the people named “Eli”--there were only two--stand and then sit.
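For programmers who want to see that neighbor rule on actual GPU hardware, here is a minimal CUDA sketch--not code from Horvath’s talk; the kernel, buffer names, and sizes are purely illustrative. Each GPU thread plays one audience member and, on every step, copies the state of the person on its left, so a stand-up (or sit-down) wave sweeps across the “room” one seat at a time, a simplified, one-directional version of the rule he acted out.

#include <cstdio>
#include <cuda_runtime.h>

// One step of the "audience" rule: each thread is one person who copies the
// state of the neighbor on their left (1 = standing, 0 = sitting). Repeating
// the kernel sweeps a stand-up (or sit-down) wave across the room in parallel.
__global__ void propagate(const int *in, int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    out[i] = (i == 0) ? in[0] : in[i - 1];   // seat 0 keeps its assigned state
}

int main()
{
    const int n = 16;                        // sixteen seats in this toy "room"
    int seats[n] = {0};
    seats[0] = 1;                            // the far column stands up first

    int *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(int));
    cudaMalloc(&d_out, n * sizeof(int));
    cudaMemcpy(d_in, seats, n * sizeof(int), cudaMemcpyHostToDevice);

    for (int step = 0; step < n; ++step) {
        propagate<<<1, n>>>(d_in, d_out, n);
        int *tmp = d_in; d_in = d_out; d_out = tmp;   // ping-pong read/write buffers
    }

    cudaMemcpy(seats, d_in, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("%d ", seats[i]);             // by now the whole row is standing
    printf("\n");
    // Re-seeding seats[0] = 0 and running more steps would send a sit-down
    // wave across the room in exactly the same way.

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The separate read and write buffers, swapped each step, are the same trick real simulation codes use so that every thread’s update within a step depends only on the previous state and never on a neighbor that has already changed.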

Horvath also took part in my favorite session of the day, a panel on the GPU revolution in film production, a biased opinion on my part because I moderated the session. Also on the panel were Arthur Shek, a technology manager at Disney; Rob Bredow, CTO at Sony Pictures Imageworks; and Thad Beier, head of CG at Digital Domain. Even though it was the last session on the last day, these incredibly smart men held the audience, packed into the room, rapt as they explained how each studio has used the GPU and plans to use it.

“We have a GPU renderfarm in our basement now,” Shek said. For Bolt, the studio ported a [Pixar] RenderMan shader for an eye rig to the GPU so that artists could interact with a character’s rendered eye. “It’s a tiny thing that had a huge impact,” he added. The studio is also using the GPU to accelerate sprite rendering and particle simulations. “We’re seeing 10 to 40 times the speed-up.”

Bredow noted that Imageworks also saw improvements in that studio’s sprite-based renderer, Splat, once they had ported it to the GPU. “We saw renders in 15 to 25 seconds that would normally take 15 to 25 minutes,” he said, while showing Splat for the first time publicly. “It’s an order of magnitude faster.” In other areas, such as level set solvers, the studio sees speed increases of 20 to 30 times.

Beier noted that code written for the GPU still tends to be fragile [something Nvidia no doubt hopes the new announcements address], and the studio hasn’t yet implemented GPU-based software.

“For us, there are only three speeds,” Beier said. “Overnight, get a cup of coffee, and interactive. Speed-ups matter only if they push you from one to another. So, we’re looking at making the switch in a year. Of course, I’ve been saying that for five years.”

Beier noted, however, that The Foundry is looking seriously at GPU acceleration for the next version of Nuke, which originated at Digital Domain. “Now that OpenCL is available, they have more confidence in investing in GPUs,” he said. One possibility: abstracting the compositing software into atomic operations.

Indeed, everyone noted that switching to heterogeneous computing--that is, using GPUs and CPUs in combination--would require new mind-sets. “It isn’t just re-coding,” Horvath says of the GPU. “It’s a specialized device.” A specialized device that can make a big impact.

“It feels like a big change in the paradigm,” says Dan Candela, director of technology at Disney. He cautioned that he’d re-evaluate sometime later to be sure the excitement of the moment wasn’t Kool-Aid’ing his thoughts, but added, “It feels like a big wave is coming.”

Nvidia has already announced a second GPU conference scheduled for September 2010. The trick will be finding a way to keep the same level of enthusiastic interaction among attendees as word spreads and the conference can’t help but grow. “The tap is open now,” says Vivoli.

Webcasts of the keynote addresses and day-by-day wrap-ups are online at www.nvidia.com; the company blog is at http://blogs.nvidia.com/ntersect/, and daily conference wrap-ups are at http://blogs.nvidia.com/gtc/. The company promises to add audio/screencast recordings of most sessions, along with presentations and posters, online in mid-October.