Promises, Promises
Barbara Robertson
August 16, 2011


Looking forward with SIGGRAPH 2011
First, a word about Vancouver. What a dazzling location! The conference center is on the water – the harbor, actually, where the cruise ships dock and seaplanes take off. The glassy buildings downtown reflect each other. Totem pole sculptures reflect the native culture. Many of the buildings have plants growing on their roofs. Fountains and man-made streams are everywhere. It’s all so green and beautiful. And I believe the location raised people’s spirits. “I could move here” was perhaps the most common phrase heard at the show.


View from the hotel

But, back to the show. Nvidia opened SIGGRAPH 2011 for me Monday evening with a look at future technology in a press briefing at the convention center. By the end of the week, I realized this was prophetic: the promise of future technology was the overriding theme of SIGGRAPH 2011. Not the promise of vaporware, but an early look at promising new technology that we would likely see this fall.

For its part, Nvidia plans to put GPUs in the cloud with Project Monterey, whose official name is now Quadro Virtual Graphics Technology. And, the hardware vendor introduced its Project Maximus, which will give users running “Maximus-powered” workstations control over which graphics, modeling, and rendering processes they send to Quadro graphics cards and which to the Tesla GPU. It all spells speed, and speed was the second theme.



The Studio

Right after breakfast Tuesday morning, Industrial Light & Magic and Sony Pictures Imageworks announced the upcoming release of Alembic 1.0, which adds automatic data de-duplication. This means that when the software recognizes repeated shapes in complicated geometry, it writes only a single instance to disk. It happens automatically. The result is that Alembic reduces the amount of disk space needed and speeds write and read performance. ILM and Imageworks already use the code in production and are seeing huge advantages.
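To make the idea concrete, here is a minimal Python sketch of write-time de-duplication in that spirit: hash each incoming geometry sample and store only the novel ones, so a repeated sample costs a reference rather than a copy. This is an illustration of the concept only, not ILM and Imageworks’ actual implementation; the DedupArchive class is hypothetical.

import hashlib
import pickle

class DedupArchive:
    def __init__(self):
        self.store = {}    # content hash -> single stored copy of the data
        self.samples = []  # per-frame stream of hash references

    def write_sample(self, points):
        """Write one geometry sample, e.g., a tuple of vertex positions."""
        blob = pickle.dumps(points)
        key = hashlib.sha1(blob).hexdigest()
        if key not in self.store:   # only novel data is written out
            self.store[key] = blob
        self.samples.append(key)    # a repeat costs one small reference

    def read_sample(self, frame):
        return pickle.loads(self.store[self.samples[frame]])

# A static prop repeated over 100 frames stores its geometry exactly once.
archive = DedupArchive()
prop = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
for _ in range(100):
    archive.write_sample(prop)
print(len(archive.samples), "samples,", len(archive.store), "stored copy")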

Imageworks CTO Rob Bredow compared read and write times for a 217-frame, 95-second shot from Smurfs using OBJ files and Alembic files. Write time with OBJ: 1.5 hours; with Alembic: 3.5 minutes. Read time for one frame in the OBJ file format: 65 seconds; in the Alembic file format: one-tenth of a second. ILM’s Tommy Burnette, who heads the global pipeline at Lucasfilm, had equally outstanding numbers: the character Iron Man stored in ILM’s proprietary file format took 8 GB of disk space; in Alembic, 73 MB. The Alembic computer graphics interchange format focuses on efficiently storing and sharing animation and visual effects scenes across multiple software applications. The open-source code is ready now. The promise is that software vendors will implement it. Luxology was first up. Autodesk, the Foundry, Pixar, Side Effects Software, and Nvidia plan support in the next versions of their software.


Disney

More promising technology: Vicon’s Mobile Mocap, a snazzy helmet fitted with four tiny infrared cameras for capturing facial animation data at 1280 x 720 resolution (720p) at 60 frames per second. A wearable, iPhone-sized, solid-state “station” handles compression on the fly. The company has not yet announced a release date for this future tech, though.

Tweak Software showed a ton of hugely practical features, including support for stereo 3D TV reviews, support for ARRI Alexa and RED formats, and a presentation mode for its popular RV image and sequence viewer, now in Version 3.12, scheduled for a fall release. With the presentation mode, users can display a full-screen view on one monitor while controlling playback from another.


Emerging Technologies

Shotgun Software introduced a new navigation system that features a simple visual user interface for data organization and exploration within its production tracking software. Also included in a planned September release of Shotgun 3.0 is integration with Rush, which updates Shotgun automatically with each render, and with CineSync Online for review sessions. The software already integrates with RV, Deadline, and Qube.

At Autodesk, Maurice Patel, industry manager for M&E, announced a five-year license of Disney’s XGen technology. Disney look development supervisor Chuck Tappan explained that the studio, which described the technology in a 2003 SIGGRAPH paper, first used it to generate arbitrary primitives on a surface for Chicken Little. (Tappan co-authored that paper, titled “XGen: arbitrary primitive generator,” with Thomas Thompson II and Ernest Petti.) Since then, Disney Animation and Pixar Animation Studios have used the “many from few” technology for several feature films, including, notably, to create and art-direct Rapunzel’s golden locks in Tangled. Autodesk has not yet announced when the software will find its way into the company’s many products.
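The “many from few” idea is easy to picture in code: a dense population of primitives, such as hairs, is generated on the fly by interpolating a handful of artist-groomed guide curves. Here is a minimal Python sketch of that concept; it illustrates the general idea only and is not Disney’s XGen algorithm (the function names are my own).

import random

def lerp(a, b, t):
    """Linearly interpolate between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def interpolate_strand(guide_a, guide_b, t):
    """Blend two guide curves (lists of 3D points) point by point."""
    return [lerp(pa, pb, t) for pa, pb in zip(guide_a, guide_b)]

# Two guide curves with four control points each (root to tip).
guide_a = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.1), (0.0, 2.0, 0.3), (0.0, 3.0, 0.6)]
guide_b = [(2.0, 0.0, 0.0), (2.0, 1.0, -0.1), (2.0, 2.0, -0.2), (2.0, 3.0, -0.4)]

# Many from few: scatter 10,000 strands between just two guides.
random.seed(1)
strands = [interpolate_strand(guide_a, guide_b, random.random())
           for _ in range(10000)]
print(len(strands), "strands generated from 2 guides")

Because the dense primitives are derived rather than stored, an artist grooms only the guides and the generator fills in the rest.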


Autodesk Maurice Patel

On the show floor, the operative words were “cloud,” “rendering,” “motion capture,” and “3D printers.” Black Sky, for example, had two booth babes help attract people who might otherwise have missed learning about their renderfarm hardware, which packs lots of CPUs into a bitty rack, and their render-in-the-cloud solution scheduled for a winter release.

But my favorite was a discovery in the back corner: a large booth, often surprisingly uncrowded. Tandent Vision Science introduced a beta version of its Trillien software, which automagically separates illumination from an image and removes shadows. Here’s how they describe the software: an illumination preprocessor that solves the fundamental problem of recognizing the stability of surfaces under varying illumination. As one of the scientists in the booth put it, “Once you think of images as light and matter, you can do all sorts of things.” The company plans to offer plug-ins for Adobe’s Photoshop and The Foundry’s Nuke. They’re looking for input now. If you have images that you want to turn into texture maps, faces from which you wish to remove shadows, or photographs that you want to turn into geometry, you just might want to talk with these folks.
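Tandent hasn’t published how Trillien works, but the problem it attacks, factoring an image I into illumination L and reflectance R, has a classic textbook baseline that shows the basic idea. The Python sketch below uses homomorphic (Retinex-style) filtering, assuming I = L * R and that illumination varies slowly across the frame; it is emphatically not Tandent’s method, and robust shadow removal is far harder than this.

import numpy as np
from scipy.ndimage import gaussian_filter

def separate_illumination(image, sigma=8.0):
    """Split a grayscale image (float array, values > 0) into (L, R)."""
    log_i = np.log(image)                                  # I = L*R -> log I = log L + log R
    log_l = gaussian_filter(log_i, sigma, mode="nearest")  # low frequencies ~ illumination
    log_r = log_i - log_l                                  # residual ~ reflectance (up to scale)
    return np.exp(log_l), np.exp(log_r)

# A flat gray card (constant reflectance) under lighting that falls off
# smoothly from left to right: the recovered reflectance comes back much
# flatter than the shaded input once the illumination is factored out.
falloff = np.linspace(1.0, 0.4, 64)
image = 0.8 * np.tile(falloff, (64, 1))
L, R = separate_illumination(image)
print("input spread:      ", round(float(image.max() - image.min()), 3))
print("reflectance spread:", round(float(R.max() - R.min()), 3))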


Nvidia Project Maximus

So, that’s it for the techie side. Other things I loved this year: listening to Turner Whitted at the Pioneers’ Dinner talk about his magical early days in computer graphics and hearing his plea that the pioneers foster that magic by mentoring young people. And then visiting The Studio, which was more alive than ever with workshops and general artistic inventiveness, and seeing some of that magic in action.

On the last day, I moderated the production session “Industrial Light & Magic: New Solutions for New Challenges.” Watching Eddie Pasquarello, Ben Snow, and Nigel Sumner present the studio’s work on Cowboys & Aliens, Pirates of the Caribbean: On Stranger Tides, and Transformers: Dark of the Moon would have been great as a member of the audience, but it was even better to have a front-row seat. All three were so willing to convey their knowledge and experiences. Again, sharing the magic.

The 15,872 attendees reportedly made SIGGRAPH 2011 Vancouver’s biggest conference ever, and one of the largest turnouts for a SIGGRAPH held outside of LA in recent years; that’s a promising sign for SIGGRAPH. And, networking with your best friends among the thousands is, as always, the best thing of all.


3D printer output

This year, I had two standout dinners—one with Maria Elena Gutierrez, the director of the VIEW conference, one of my favorite conferences ever, which I promised to attend in October. And, a surprise dinner with Professor Donald Greenberg, such an inspiration. He beamed with excitement over the technical paper from MIT on Computational Plenoptic Imaging. I’ll leave you with the explanation from authors Gordon Wetzstein, Ivo Ihrke, Douglas Lanman, and Wolfgang Heidrich, which promises visual possibilities beyond my imagination: “The plenoptic function is a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have greatly evolved in the last years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost, except for a two-dimensional, spatially-varying subset—the common photograph. In this state-of-the-art report, we review approaches that optically encode the dimensions of the plenoptic function transcending those captured by traditional photography and reconstruct the recorded information computationally.”
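For readers who want the formal picture, here is the classic formulation the authors build on, Adelson and Bergen’s plenoptic function; the simplified notation below is mine, not the report’s. The function assigns a radiance to every position, direction, wavelength, and instant:

\[
  P = P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t)
\]

% A standard sensor integrates over exposure time t, its spectral band
% in lambda, and the directions Omega admitted by the aperture, keeping
% only a 2D spatial slice: the ordinary photograph.
\[
  I(x, y) \;=\; \int_{t} \int_{\lambda} \int_{\Omega}
      P(x, y, z_0, \theta, \phi, \lambda, t)\; d\omega\, d\lambda\, dt
\]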

See you in Torino, Italy, in October? (If not, be sure to watch for my report.)