Science Meets Art
Issue: Volume 34, Issue 1 (Jan-Feb 2011)

Each February, the Academy of Motion Picture Arts and Sciences acknowledges the science half of the organization’s name by presenting its Scientific and Technical Awards. This year, the Academy singled out 22 people who contributed to 10 SciTech awards. Among them: Tony Clark, Alan Rogers, Neil Wilson, and Rory McGregor of Rising Sun for developing CineSync; Mark Sagar of Weta Digital for his early and continuing work on facial motion capture; and Arnauld Lamorlette, now of The Bakery, and Eric Tabellion of PDI/DreamWorks for their work on global illumination at PDI.

Ten years ago, Clark began looking for a better method of submitting shots to visual effects supervisor Jeff Okun than by mailing Betacam tapes from Rising Sun in Australia to Los Angeles. Sagar was trying to put virtual humans on the Internet. And at PDI/DreamWorks, Lamorlette and Tabellion were proving to ESC Entertainment that they could create photorealistic images by using global illumination to light a test shot for The Matrix Reloaded.

The technology these men were working on then has led, in the case of CineSync, to fundamental changes in the way the visual effects industry works today; to last year’s astonishing facial animation in Avatar and, now, to mind-blowing new areas of research for Mark Sagar; to new ways for artists to work at PDI; and to a new product scheduled for release this year from The Bakery.

Collaboration

Technical Achievement Award (Academy Certificate) to Tony Clark, Alan Rogers, Neil Wilson, and Rory McGregor for “the software design and continued development of CineSync, a tool for remote collaboration and review of visual effects. Easy to use, CineSync has become a widely accepted solution for remote production collaboration.”

In 2000, Rising Sun, a postproduction studio in Adelaide, Australia, had landed work for its biggest-budget film to date, Warner Bros.’ Red Planet. “We were a long way away,” says founder and visual effects supervisor Tony Clark. “It takes about five days to ship Betacam SPs. So we had begun sending QuickTimes to Jeff Okun, the overall visual effects supervisor.”

Clark would open the QuickTime in Australia. Okun would open the same file in Los Angeles. And, they’d talk on the phone. “I’d say, ‘Go to frame 100,’ ” Clark says, “and then, ‘Do you see the yellow blotch to the right? Make it a little more red.’ ”

It quickly became clear that having a nice chat about the image wasn’t enough. “We’re in South Australia, so it’s very challenging to compete,” Clark says. “Having a conversation on the phone is fine, but we needed to show our work in progress and get quality feedback from our clients. If you’re having a creative collaboration, you need to see a common image and refer to it. You want to point to it.”

The studio’s research group had already created a color management tool that allowed people in its Sydney and Adelaide offices to share a film recorder. Similarly, the first implementation of the tool that would become CineSync only let artists communicate between those two offices. “The first versions were no more than a QuickTime player,” Clark says. “They went backward and forward and stayed in sync, and you had the ability to point. It was so simple an idea, but so effective.”
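The mechanics behind those early versions can be pictured with a small sketch. The relay below is hypothetical; the port, message names, and fields are invented for illustration and are not CineSync’s actual protocol. Each reviewer plays the same local QuickTime file, and the relay forwards seek, play/pause, and pointer messages so everyone stays on the same frame.

```python
# Hypothetical synchronized-review relay in the spirit of the early versions
# described above. Every client plays the same local movie file; the relay
# simply forwards control messages so all players stay in sync.
import json
import socket
import threading

HOST, PORT = "0.0.0.0", 5900  # assumed values for this sketch

clients = []
lock = threading.Lock()

def broadcast(message: dict, sender: socket.socket) -> None:
    """Relay a control message (seek, play, pause, pointer) to every other client."""
    data = (json.dumps(message) + "\n").encode()
    with lock:
        for conn in clients:
            if conn is not sender:
                conn.sendall(data)

def handle(conn: socket.socket) -> None:
    with lock:
        clients.append(conn)
    try:
        buffer = b""
        while chunk := conn.recv(4096):
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                broadcast(json.loads(line), sender=conn)
    finally:
        with lock:
            clients.remove(conn)
        conn.close()

def serve() -> None:
    with socket.create_server((HOST, PORT)) as server:
        while True:
            conn, _addr = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    # A client would send messages such as
    #   {"type": "seek", "frame": 100}
    #   {"type": "pointer", "frame": 100, "x": 0.62, "y": 0.40}
    # and apply any message it receives to its local player.
    serve()
```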

When shots for Harry Potter and the Goblet of Fire landed at Rising Sun, the studio shared CineSync with overall visual effects supervisor Jim Mitchell. “After a couple of sessions, he asked if he could use it with other people,” Clark says. “So we changed it so he could use it with another company. Then other people started asking if they could use it or buy it for another show, and it started to flow out.”

Although some studios had also created their own remote previewing systems, CineSync became the first such system available commercially. And as the Internet became more pervasive and faster, CineSync grew with it. “We had one user, then two, three, 10,” Clark says. “We have over 100 licenses now. Each of those can have four accounts, and each of those can have 25 users. It’s all around the world. I feel that CineSync has been an enabler to the global visual effects world, for better or worse.”

In fact, so many studios use the program that it’s become a verb. These days, people use it not only to communicate between studios halfway around the world, but also to share files with those a few blocks away. “We’re finding that more and more people CineSync across town,” Clark says. “They might be in Burbank and need to talk with someone in Santa Monica at 4:00 in the afternoon during rush hour.”

Clark has even reviewed shots using his laptop and cell phone from airport lounges. “It’s not optimal, but it works,” he says. He envisions a time when people sharing visual information in areas other than visual effects—mining, oil exploration, medical imaging, education, and so forth—begin using the system. But, that isn’t a main goal at Rising Sun.

Rising Sun Research, the group responsible for CineSync, runs autonomously from Rising Sun Pictures. Clark says they make enough money from software sales to sustain their own development team and to move the research forward. “It doesn’t make us rich,” he says. “We’re a small company, and it isn’t our main focus, but a key tool. It enables us to do our job. It’s important for Rising Sun Pictures, as a vendor of visual effects work, to compete from Australia.”


Artists working in studios around the globe can collaborate by pointing to and annotating synchronized images using Rising Sun’s CineSync software.

And, unless someone were to infuse the company with capital to exploit other markets, it’s likely that Rising Sun will keep CineSync focused on visual effects. That’s fine with Clark, who’ll settle for fame, not fortune, in this case.

“This is an enormous honor,” Clark says of the SciTech award. “It’s really as good as it gets to have devised something like this, create something ubiquitous in the industry, and be recognized for it. It’s the best thing that’s happened to me in a long time.”

Facial Animation

Scientific and Engineering Award (Academy Plaque) to Dr. Mark Sagar “for his early and continuing development of influential facial motion retargeting solutions.”

This is Sagar’s second SciTech award. Last year he received a Scientific and Engineering Award with Paul Debevec, Tim Hawkins, and John Monos for the design and engineering of the Light Stage capture device and the image-based facial rendering system developed for character relighting in motion pictures.

Mark Sagar moved into the world of visual effects and animation from bioengineering; he received his PhD in bioengineering from the University of Auckland (New Zealand), followed by postdoctoral research at MIT from 1996 to 1997. “I was working in surgical simulation when some businessmen asked me to apply the technology for faces I’d been working on to virtual actors,” he says. “That’s how I ended up in Hollywood.” From 1997 to 2000, Sagar was co-director of R&D for Pacific Title/Mirage, which focused on virtual actors, and he held the same title from 2000 to 2001 at LifeFX, a company that attempted to move the technology onto the Internet. Then, the stock market bubble burst, and Sagar found himself looking for work with an amazing resume in hand: He had directed the astounding short film “The Jester,” starring an animated photorealistic face, which premiered at SIGGRAPH in 1999. And, he had collaborated with Debevec to light his faces, an alliance that resulted in the Light Stage technology.

“Sony Pictures Imageworks hired me to apply the Light Stage imaging and rendering system to Doc Ock in Spider-Man 2,” Sagar says. “But while I was there, I looked at what they were doing with motion capture and proposed using a FACS system instead.” And that, together with his later work at Weta Digital, would eventually result in this year’s SciTech award.

Sagar had first seen research on using FACS, the Facial Action Coding System, with data from a tracked face while at MIT in the mid-’90s. “The methods weren’t applicable to film production, but I kept it in the back of my mind,” he says. In mid-2002, he built a system that analyzed motion-capture points and automatically calculated FACS values for the expressions they described. He also created a system for calibrating actors; that is, a set of expressions based on FACS, plus ones he added for dialog.
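In outline, that calculation can be treated as a fitting problem: given a calibrated set of FACS expressions captured as marker displacements, each new frame of capture is expressed as a weighted combination of them. The sketch below is an illustration under that assumption, using a non-negative least-squares fit; it is not Sagar’s production solver, and the function names are invented.

```python
# Hypothetical FACS solve: fit each captured frame as a non-negative
# combination of calibrated FACS expression displacements.
import numpy as np
from scipy.optimize import nnls

def solve_facs_weights(neutral, frame, facs_basis):
    """
    neutral:    (M, 3) marker positions of the actor's neutral face
    frame:      (M, 3) marker positions for the current captured frame
    facs_basis: (K, M, 3) marker displacements for K calibrated FACS expressions
    returns:    (K,) activation weights, one per FACS expression
    """
    b = np.ravel(frame - neutral)                        # observed displacement
    A = np.reshape(facs_basis, (len(facs_basis), -1)).T  # one column per expression
    weights, _residual = nnls(A, b)                      # keep activations non-negative
    return weights
```

Solved this way, a frame of 100 noisy points reduces to a handful of named activations, for example “jaw open 0.7, lip corner puller 0.3,” which is exactly the kind of editable description the following paragraphs discuss.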

“In the early 2000s, motion capture was mainly done by using the points to deform a patch of skin directly,” Sagar says. “The problem is that it introduced noise, it was difficult to see what was going on, and it was especially difficult to edit. If you have 100 points moving around on a face and you want to change an expression, there’s no way to do that because the points are tied to the skin. For example, if you want to open the jaw, because the points are tied to the skin, there’s no simple way to control that.”

At LifeFX, Sagar and his team had created photorealistic faces by capturing dense motion data they retargeted to digital characters. “It was easy to move the data onto a digital double,” he says, “but when you try to map the data onto a face that’s fundamentally different, the problems start coming. You can warp the data, which is what people were doing for some solutions. But, say you have an eyebrow. If you move it up on a human, the skin moves in a straight line. On a gorilla, it moves around the bone in a circular motion. There’s no easy way to change that linear motion to a circular motion mathematically and get the scale right. So I wondered, What’s the actual information? What’s the fundamental information that the face gives us? How can we break this down into something that’s universal? And that’s when I started experimenting with FACS. FACS is really an alphabet of facial expressions. It’s the simplest way to represent what’s going on in the face.”

At Sony Pictures Imageworks, Sagar convinced Damian Gordon, the motion-capture supervisor, to let him do a test for Monster House. “The test was successful,” he says. “They put it into production on Monster House.”


Mark Sagar, at left, received a Scientific and Engineering Award for his early and continuing work on retargeting motion data onto digital characters. A system he developed with a team at Weta Digital helped Avatar director James Cameron see retargeted expressions on the Na’vi in real time.


Then a year later, the New Zealand native needed to go home for family reasons. As luck would have it, though, Weta Digital had just started working on King Kong. “They were going to hand-animate King Kong,” he says. “But Joe Letteri [senior visual effects supervisor] was open to new ideas, and I got together with Andy Serkis, the actor who played King Kong. We had to do about 40,000 frames of King Kong animation to convince Peter Jackson, but we changed the way King Kong was going to be animated.”

The facial animation system Sagar has developed is flexible: It can work with data captured from any motion-capture system, whether points, Mova data, video images, and so forth. It calculates the FACS expressions and maps them onto a digital character using blendshapes, muscle models, joints, or whatever controls the rig provides. “I have a way to map the FACS data onto whatever animation controls the animators use,” he says. “I had to come up with special ways of representing the information in order to calculate it, certain math tricks. But, it computes the data for them. And the good thing is that it allows animation and motion capture to be mixed, so animators have the best of both worlds.”
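The retargeting step itself can be pictured as a table that translates FACS activations into rig values, plus a blend with hand-keyed animation. The control names and mapping scheme below are invented for illustration; they are not Sagar’s system.

```python
# Hypothetical retargeting of solved FACS weights onto rig controls, plus a
# simple blend with keyframed animation so animators can override the capture.

def retarget_to_controls(weights, control_map):
    """
    weights:     dict of FACS action name -> activation (0..1) for one frame
    control_map: dict of FACS action name -> list of (rig_control, gain) pairs
    returns:     dict of rig control name -> value
    """
    controls = {}
    for action, value in weights.items():
        for control, gain in control_map.get(action, []):
            controls[control] = controls.get(control, 0.0) + gain * value
    return controls

def mix_with_animation(captured, keyframed, blend=0.5):
    """Linear blend of solved and hand-keyed control values."""
    names = set(captured) | set(keyframed)
    return {n: (1 - blend) * captured.get(n, 0.0) + blend * keyframed.get(n, 0.0)
            for n in names}

# Example: a brow-raise action drives two hypothetical blendshape controls with
# a creature-specific gain, so the same activation can move a human brow in a
# straight line or a gorilla brow around the bone.
control_map = {"brow_raise": [("browRaise_L", 1.2), ("browRaise_R", 1.2)]}
print(retarget_to_controls({"brow_raise": 0.8}, control_map))
```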

Once the idea of using FACS data entered the atmosphere, other studios began following suit. Meanwhile, for Avatar, Sagar and a crew at Weta Digital created a real-time system. “James Cameron could look through a virtual camera and see the Na’vi expressing and looking around live. We mixed that together with the body motion capture so you could see the characters performing in the environment.”

Because Cameron wanted the actors to wear helmet cameras rather than install cameras around the stage, Sagar used real-time computer vision techniques to track the face in the 2D images and compute the FACS expressions. “The good thing is that the face is a constrained system, so it works, even though the points are moving on a plane,” he says. “The system recognizes that someone is pursing their lips, so on the 3D model, it pushes the lips forward.”
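Because the helmet cameras provide only 2D images, the same kind of fit can be run against image-plane displacements, relying on the face being a constrained system, as Sagar notes. Again, this is an illustrative sketch with invented names, not Weta’s real-time solver.

```python
# Hypothetical 2D variant: solve FACS activations from tracked image-plane
# displacements, using a FACS basis projected through the helmet-camera view.
import numpy as np
from scipy.optimize import nnls

def solve_facs_from_2d(neutral_2d, tracked_2d, facs_basis_2d):
    """
    neutral_2d:    (M, 2) tracked feature positions on the neutral-face image
    tracked_2d:    (M, 2) tracked positions in the current video frame
    facs_basis_2d: (K, M, 2) image-plane displacements for K FACS expressions,
                   assumed to be captured through the same fixed helmet camera
    returns:       (K,) activations, applied to the 3D rig (so pursed 2D lips
                   push the 3D lips forward, as described above)
    """
    b = np.ravel(tracked_2d - neutral_2d)
    A = np.reshape(facs_basis_2d, (len(facs_basis_2d), -1)).T
    weights, _residual = nnls(A, b)
    return weights
```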

Now, Sagar is moving deeper. He is working with a team on a full biomechanical simulation of the face. “I want to automate how a face is built in a physically realistic way,” he says.

To do this, Sagar is collaborating with researchers at the Auckland Bioengineering Institute, and reading. “I’ve read 100 plastic surgery journals,” he says. “I’ve been to dissections. I’ve had my head MRI’d.”

And, that means he’s now come full circle. “My background is in bioengineering,” Sagar says. “It’s fun to return to exploring the physical basis for all this.”

Global Illumination

Technical Achievement Award (Academy Certificate) to Eric Tabellion and Arnauld Lamorlette for the creation of a computer graphics bounce-lighting methodology that is practical at feature-film scale.

Arnauld Lamorlette and Eric Tabellion proved they could use global illumination to create photorealistic images in test shots for The Matrix Reloaded, but PDI/DreamWorks, which was then still doing postproduction visual effects work, did not end up working on that film. Instead, Lamorlette, who was head of effects for Shrek 2, and Tabellion, who was on the R&D staff, decided to apply what they had learned to the sequel.


The character Hiccup (above), from How to Train Your Dragon, owes the soft lighting on his cheeks to technology developed at PDI/DreamWorks by Arnauld Lamorlette (at right, top) and Eric Tabellion (at right, bottom), for which they received a Technical Achievement Award.


“Our technique was OK for two shots in a live-action film, but for a full CG movie, it was too slow,” Lamorlette says. “Juan Buhler [effects lead] had been working on illumination techniques using point cloud particles to create fast subsurface scattering, so we thought, Why not use it for global illumination? It was great; using particles increased the speed tremendously.”

But, the pair decided to move in a different direction: to bake lighting into textures rather than use particles. “We were using NURBS, so it was easy to go from textures to parametric space,” Lamorlette says. “We already had a whole pipeline for filtering textures and removing noise. So by using textures rather than particles, suddenly that whole texture pipeline was available. It was more stable than using point clouds, we could do less computation, and it fit the way people already worked in the pipeline.”

Also speeding the computation was a decision by Tabellion and Lamorlette to reduce the number of times they’d send rays into the environment. “We did a lot of tests and discovered that with just one bounce, which is called color bleeding, the quality was tremendous,” Lamorlette says. “Having two bounces added maybe five percent more. So, we decided to keep just one bounce and not pay the price for a small gain in quality.”
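The single-bounce idea can be sketched as a gather loop: at each shading point, rays go out once into the scene, and the direct lighting already baked at whatever they hit is averaged back in. The scene interface below (trace, baked_direct) is assumed for illustration and is not the PDI/DreamWorks renderer’s API.

```python
# Simplified single-bounce ("color bleeding") gather: indirect light comes
# from direct illumination pre-baked at the surfaces the gather rays hit.
import math
import random

def one_bounce_irradiance(point, normal, trace, baked_direct, samples=64):
    """Monte Carlo estimate of one-bounce indirect irradiance (RGB)."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        direction = cosine_sample_hemisphere(normal)
        hit = trace(point, direction)        # assumed: returns a hit point or None
        if hit is None:
            continue
        radiance = baked_direct(hit)         # assumed: RGB direct light baked at hit
        for c in range(3):
            total[c] += radiance[c]
    # with cosine-weighted samples, the irradiance estimate is pi * mean(L)
    return [c * math.pi / samples for c in total]

def cosine_sample_hemisphere(normal):
    """Cosine-weighted direction around a unit-length normal."""
    r1, r2 = random.random(), random.random()
    phi, r = 2.0 * math.pi * r1, math.sqrt(r2)
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - r2)
    nx, ny, nz = normal
    a = (1.0, 0.0, 0.0) if abs(nx) < 0.9 else (0.0, 1.0, 0.0)
    tx, ty, tz = a[1] * nz - a[2] * ny, a[2] * nx - a[0] * nz, a[0] * ny - a[1] * nx
    inv = 1.0 / math.sqrt(tx * tx + ty * ty + tz * tz)
    tx, ty, tz = tx * inv, ty * inv, tz * inv        # tangent = normalize(a x n)
    bx, by, bz = ny * tz - nz * ty, nz * tx - nx * tz, nx * ty - ny * tx  # n x t
    return (x * tx + y * bx + z * nx, x * ty + y * by + z * ny, x * tz + y * bz + z * nz)
```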

The first film lit at PDI/DreamWorks using the technique was Shrek 2, which was released in 2004. Tabellion claims it was the first big deployment of global illumination in a feature-length animated film. “We used it for characters, which were in 80 percent of the movie, and for maybe 30 percent of the environments,” he says. “We raised the bar visually because we could light the film differently.”

The studio has used it for every film since. “It’s become a de facto technique for lighting the movies,” Tabellion says. “It keeps the shot complexity down in terms of light rigging.”

In 2004, Tabellion and Lamorlette published a SIGGRAPH paper titled “An Approximate Global Illumination System for Computer Generated Films,” describing their “efficient raytracing strategy and its integration with a micro-polygon based scan-line renderer supporting displacement mapping and programmable shaders.”

Tabellion singles out the five main innovations: irradiance caching, which existed, but which they perfected and modified; raytracing optimization through the use of two levels of detail—low-resolution geometry for raytracing indirect illumination, and full geometry for final rendering; pre-computing and then baking direct illumination and diffuse shading into texture maps, which sped rendering by a factor of 10; using a one-bounce light shader and bounce-filter light shaders that could control global illumination in specific regions, which made the technique art-directable; and lastly, a new approximate lighting model that made it possible to apply irradiance caching on shiny materials.
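Of those five, irradiance caching is the one with the longest pedigree; the generic, Ward-style version they started from computes the expensive gather only at sparse locations and reuses it nearby when an error test passes. The sketch below shows that textbook reuse test; it is not the modified caching described in their paper.

```python
# Generic irradiance-caching reuse test (Ward/Heckbert style): a cached gather
# is reused when it is close to the shading point and its normal agrees.
import math

class CacheRecord:
    def __init__(self, position, normal, irradiance, harmonic_distance):
        self.position = position                    # where the full gather was done
        self.normal = normal
        self.irradiance = irradiance                # RGB result of that gather
        self.harmonic_distance = harmonic_distance  # mean distance to surfaces hit

def record_weight(record, point, normal):
    """Weight is large when the record is nearby and nearly coplanar."""
    dx = [p - q for p, q in zip(point, record.position)]
    dist = math.sqrt(sum(d * d for d in dx))
    ndot = max(-1.0, min(1.0, sum(a * b for a, b in zip(normal, record.normal))))
    denom = dist / record.harmonic_distance + math.sqrt(max(0.0, 1.0 - ndot))
    return 1.0 / denom if denom > 0.0 else float("inf")

def interpolate_irradiance(records, point, normal, alpha=0.3):
    """Blend cached records whose weight beats 1/alpha; None means do a new gather."""
    total_w, total = 0.0, [0.0, 0.0, 0.0]
    for rec in records:
        w = record_weight(rec, point, normal)
        if w > 1.0 / alpha:
            total_w += w
            for c in range(3):
                total[c] += w * rec.irradiance[c]
    if total_w == 0.0:
        return None   # caller runs a full one-bounce gather and stores a new record
    return [c / total_w for c in total]
```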

Since 2004, Tabellion has made one significant change: a way to improve irradiance caching for surfaces with displacement maps. “Other than that, the technique is as it was in the early days,” he says. “We’ve looked at other lighting models, but people are comfortable with this, and they’re perfecting the technique. I can see that the scenes are better balanced. In the early days, there was a tendency to add traditional lights. Now, the artists are dealing only with global illumination, and it shows. So, I’ve been working on other topics.”

For his part, Lamorlette, co-founder and CTO at The Bakery in Gemenos, France, is working on a suite of lighting tools that the company plans to launch by the end of the first quarter. The suite includes an interactive lighting application, a rendering engine, and tools for managing the lighting process. “It’s a new way to approach lighting and re-lighting,” Lamorlette says. “We’re using new global illumination techniques and caching as much as possible. It’s a simple approach, but the technology is not simple: The more you work on an image, the faster it is. We think it will transform the rendering and lighting process of feature films and TV.”

And More…

In addition to these inventors, the Academy gave Technical Achievement Awards to several people who created render queue systems: Greg Ercolano for the design and engineering of a series of software systems culminating in the Rush render queue management system; David M. Laur for the development of the Alfred render queue management system; Chris Allen, Gautham Krishnamurti, Mark A. Brown, and Lance Kimes for the development of Queue, a robust, scalable approach to render queue management; and Florian Kainz for the design and development of the robust, highly scalable distributed architecture of the ObaQ render queue management system.

In 2000, of the 19 science and technology awards, three were given to people working with computer graphics, including an Award of Merit to Rob Cook, Loren Carpenter, and Ed Catmull for inventing RenderMan; a Technical Achievement Award to George Borshukov, Kim Libreri, and Dan Piponi for what has become known as the Bullet-time technique; and another Achievement Award to Venkat Krishnamurthy for creating Paraform software to automatically convert data from scanned physical models into 3D computer graphics models.

This year shows just how much the industry has changed and how important computer graphics is to filmmaking today: All the awards but three centered on computer graphics. We can expect that trend to continue as CG technology and techniques evolve in the future.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.