CG In Another World
Volume 32, Issue 12 (Dec. 2009)


When we think about the first films to convince directors that visual effects created with computer graphics could open their imaginations, two films immediately come to mind: James Cameron’s The Abyss, in which a transparent CG character communicated with an actor, and Cameron’s Terminator 2, which starred a digital, liquid terminator and is lauded as the first movie to show the power of a digital pipeline. Both films won visual effects Oscars, as did Cameron’s Aliens before them and Titanic after. Titanic, released in 1997, still holds the record for the largest box-office revenue: $1.8 billion. It was the last feature film Cameron made. Until now.


A new facial motion-capture system devised by Weta Digital captured actor Zoe Saldana’s facial expressions and mouth movements to help animators give Neytiri, a CG character, an emotional performance.

The long-awaited and highly anticipated Avatar, written, directed, and produced by Cameron and released by Twentieth Century Fox, pushes digital filmmaking into new worlds. It will immerse audiences in an alien environment, one created entirely with computer graphics and projected, in theaters so equipped, in stereo 3D. Cameron used a Pace Fusion 3D camera to film the live-action segments, but they comprise a small percentage of the film. Weta Digital created the alien planet Pandora and the CG characters and creatures that inhabit it, animating the characters using data from actors’ performances on motion-capture sets. Will it have the same impact on visual effects as did Cameron’s earlier films?

“It certainly changed the way we do things,” says Joe Letteri, senior visual effects supervisor at Weta Digital. “We had to go through a complete re-tooling and re-architecting.” Now a partner at Weta, Letteri has won visual effects Oscars for two episodes of The Lord of the Rings and for King Kong, along with an Oscar nomination for his work on I, Robot while at the New Zealand studio.

In particular, Letteri notes, the studio revamped systems for real-time facial motion capture and muscles, created methods for growing a rain forest in which most of the movie takes place, implemented new lighting techniques, built a compositing pipeline to handle stereo 3D, and more. “We could not allow ourselves to cheat anything,” he says. “Everything had to be done correctly; there was no place to hide.”


Weta used an absorption-based subsurface scattering routine to give the blue-skinned avatars and Na’vi a fleshy, believable look.


In the film, Jake Sully (actor Sam Worthington), a paraplegic war veteran, is given the opportunity to inhabit the athletic body of an avatar. He opts in. His avatar is an alien, a Na’vi, a race of humanoids that populate the planet Pandora. He, like all Na’vi, is blue. A 10-foot-tall biped with a stretched, cat-like body. Almond-shaped eyes. Tail. Pointed ears. Through his avatar, Jake immigrates to Pandora, a lush planet filled with waterfalls, jungles, and six-legged creatures, some of which fly. There he meets the beautiful Neytiri (actor Zoe Saldana) and assimilates into the Na’vian culture.

Everything on Pandora—every plant, creature, and character—is digital, created by artists using computer graphics tools and moved by animators working with keyframe and motion-capture data.

“The planet was really inspired by Jim’s [Cameron] underwater dives,” Letteri says. “There’s bioluminescence. The creatures have blue skin, and the animals have vivid patterns. We all know the rules: Big animals don’t have vivid colors. But, they do underwater, and Jim said they can exist on this planet. So we brought that color palette to the surface and made it believable. However, the big thing was that Jim wanted to do facial motion capture.”

Performing Characters

For Gollum in The Lord of the Rings, Weta had captured Andy Serkis’s body, not his face. For King Kong, they glued markers on Serkis’s face and captured him in a high-resolution volume, and then retargeted the motion data to Kong’s CG face. “Jim didn’t want to go that route,” Letteri says. “He was more interested in a video head rig.”

To make a head-mounted system that would encumber the actors as little as possible, Weta decided to create software that could track facial movements using one camera. Then they took it a step further by re-projecting the motion onto a 3D model in real time.
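
Weta has not published the solver itself, but the core idea of turning one camera’s view of tracked dots into a 3D facial pose can be sketched as a small least-squares problem: assume a simple weak-perspective camera and find the blendshape weights whose projected markers best match the observed dots. The function and array names below are illustrative assumptions, not Weta’s code.

```python
# Minimal sketch (not Weta's software): solve per-frame blendshape weights from
# the 2D positions of facial dots tracked in a single head-mounted camera view.
# Assumes a weak-perspective camera, so the marker projection is linear.
import numpy as np

def solve_face_weights(neutral_3d, shape_deltas, dots_2d, scale=1.0):
    """
    neutral_3d   : (M, 3) marker positions on the neutral face model
    shape_deltas : (S, M, 3) per-blendshape 3D offsets at each marker
    dots_2d      : (M, 2) tracked dot positions for the current video frame
    Returns (S,) blendshape weights clamped to [0, 1].
    """
    # Weak-perspective projection: drop depth, apply a uniform image scale.
    P = np.array([[scale, 0.0, 0.0],
                  [0.0, scale, 0.0]])                      # (2, 3)

    # Residual the solve must explain: observed dots minus projected neutral.
    target = (dots_2d - neutral_3d @ P.T).ravel()          # (2M,)

    # Each blendshape contributes a linear 2D displacement at every marker.
    num_shapes = shape_deltas.shape[0]
    A = np.stack([(shape_deltas[s] @ P.T).ravel() for s in range(num_shapes)],
                 axis=1)                                   # (2M, S)

    # Damped least squares keeps the pose stable when dots are noisy or occluded.
    lam = 1e-3
    w, *_ = np.linalg.lstsq(A.T @ A + lam * np.eye(num_shapes),
                            A.T @ target, rcond=None)
    return np.clip(w, 0.0, 1.0)
```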

“We knew Jim would have real-time motion capture on the stage for the characters, and would be recording the faces,” Letteri says. “We thought, wouldn’t it be cool if we could do real-time faces? We knew he was coming in six weeks, so we did some all-nighters and got a system working.” When Cameron arrived, he could see actors on stage wearing a head rig that was driving the facial expressions for a CG character in real time.

Stephen Rosenbaum, who worked at Industrial Light & Magic as a CG artist on The Abyss and as a CG animator on Terminator 2, and who won a visual effects Oscar for Forrest Gump, was the liaison between Cameron’s Lightstorm group in Los Angeles and Weta in New Zealand. He helped integrate Weta’s creatures, avatar puppets, and facial-capture system into previs and the real-time motion systems developed by Lightstorm and Giant Studios. Rosenbaum was one of six visual effects supervisors at Weta who worked with Letteri on the film. The other five were Dan Lemmon, Eric Saindon, Wayne Stables, Chris White, and Guy Williams.

“Lightstorm created environments at a previs level,” Rosenbaum explains. “We created the creatures and character puppets at Weta that they used within the environments. Giant used our puppets during motion capture. And, when they had scenes where actors needed to interact with creatures, we also provided pre-animated characters so they could see the action during motion capture.”

Giant and Lightstorm performed the real-time motion capture that allowed Cameron to see the CG version of the film at a game-quality level as the actors performed in a motion-capture volume approximately 40 feet wide by 70 feet long. Giant set up the volume using close to 120 industrial cameras from Basler Vision, and handled the re-targeting, in real time, of motion from actors onto the rendered, 10-foot-tall aliens. Lightstorm’s virtual cinematography system, developed by Glen Derry, blended the characters into the virtual set using Autodesk’s MotionBuilder for real-time rendering.

Pandora in Stereo

When the characters run past Pandora’s digital plants, they look like they’re in a deep jungle in stereo 3D because Weta integrated and composited the elements volumetrically. “We did volumetric lighting, smoke, fire ... everything became volumetric,” says Joe Letteri, senior visual effects supervisor at Weta Digital. “It’s all depth-based. We have our own proprietary version of [Apple’s] Shake, so we wrote a stereo version that does everything in parallel, and we had a 3D depth compositing system inside. We also worked with The Foundry on its new stereo tool sets for Nuke. Because of the stereo, it wasn’t practical to shoot elements for anything; it all had to be spatial.”

On set, Cameron could look at the output of the Autodesk MotionBuilder files from the performance-capture sessions in stereo and adjust the camera so that Weta knew the interocular distance that he wanted and where he wanted the convergence plane. “He goes for a natural feeling,” Weta VFX supervisor Eric Saindon says, “a window into a 3D space. He seldom brings things past the convergence plane, but he definitely draws your eye where it should be.”
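
The article doesn’t spell out the camera math, but the standard off-axis approach to an interocular distance and a convergence plane reduces to a few lines; the function and example numbers below are assumptions for illustration, not the production’s actual values.

```python
# A small sketch (not Lightstorm's or Weta's rig) of an off-axis stereo setup:
# two parallel cameras separated by the interocular distance, each with a
# horizontal film-back shift that places zero parallax at the convergence plane.

def stereo_rig(interocular, convergence_dist, focal_length):
    """
    interocular      : eye separation in scene units (e.g., cm)
    convergence_dist : distance to the zero-parallax plane, same units
    focal_length     : lens focal length, same units as the film-back shift
    Returns ((left_offset, left_shift), (right_offset, right_shift)):
    the lateral camera translation and horizontal film offset for each eye.
    """
    half = interocular / 2.0
    # Similar triangles: a point at convergence_dist lands on the same image
    # position in both eyes when each film back is shifted by this amount.
    film_shift = half * focal_length / convergence_dist
    return (-half, +film_shift), (+half, -film_shift)

# Example: 6.5 cm interocular, convergence 5 m (500 cm) away, 3.5 cm lens.
left_eye, right_eye = stereo_rig(6.5, 500.0, 3.5)
```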

Creating the stereo version of the film was, as it turned out, not much of an issue. “Our 3D implementation has been really good,” Saindon says. “Because we know everything is correct in [Autodesk’s] Maya, we don’t do the stereo 3D until Jim buys off on the 2D. Then we render the other eye. The early shots were awkward, but the later sequences worked well. At the end of the day, the stereo 3D was less of a factor than we thought it would be.”

“We could tie into the body capture and add our facial capture simultaneously,” Rosenbaum says. “So [Cameron] could see the body performance and the facial gestures happen [on the CG characters] with the dialog, which was a nice feature.”

The real-time facial performances weren’t always practical; projecting the captured video onto the characters’ faces was sufficient for all but the most subtle scenes. Even so, Letteri believes the system is game changing.


Weta modeled all the plants in the rain forest on Pandora, seen here virtually, using a rule-based growth system. Some plants have as many as one million polygons.


“It’s one of those things,” Letteri says. “You can see a motion-capture demo, and it’s kind of interesting. But, on set, seeing actors and CG characters performing at the same time, well, that’s really cool. It doesn’t even demo well in a video. When you’re there, it’s a whole different feeling. You have to see it in person.”

Rosenbaum estimates that more than 80 percent of the film is virtual. “We’re delivering about 110 minutes of full CG,” he points out. “I would guess that another 20 minutes have a combination of CG and live action. And, there are some other VFX facilities helping out. We sent some flying creatures, Na’vi, environments, and vehicles to ILM, Framestore, and a few other vendors, as well. But, the bulk of the CG work is being done at Weta.” The list of other vendors that worked on previs and postvis for the film includes BUF, Halon, Hybride, Hydraulx, Lola, Pixel Liberation Front, Stan Winston Studio (now Legacy Effects), and The Third Floor.

Capturing Faces

Each actor captured on set wore a helmet with a lipstick camera attached to a boom arm, and green makeup dots on his or her face. The crew positioned the camera between the actor’s nose and upper lip to capture the mouth movement and to see the eyes. To paint the dots, the makeup artists used a vacuform mask cut with small holes designed for each actor. “We’d put the mask on the face, draw a pen mark for the dots, pull it away, and paint on the green dots,” Rosenbaum says. “The actors loved it. It took only five or 10 minutes and they were back on stage.”

To plot the dot pattern, the facial motion-capture crew had first taken video of the actors doing a FACS session—creating particular expressions, mouthing phonemes, doing prescribed facial gestures—and, if they had dialog, saying their lines. The FACS analysis helped the crew identify major muscle groups for each face so they could position the dots, sometimes as many as 70, most effectively.

For the eyes, Weta developed software to track the pupils. “We had an LED array around the camera so we could illuminate the face and see the pupil clearly,” Rosenbaum says. “And if we couldn’t get good data, we’d track the pupils from the video. Traditional facial capture has always been a problem, but I think our eye movement is fantastic. It sells the characters.”

The eye movement was particularly important because although the avatars have eyebrows, the Na’vi don’t, so their eyes needed to express much of their emotion. Yet, the irises in the Na’vi eyes are so big that the whites of their eyes show only when the characters are shocked.

“We ended up adding a stripe pattern to suggest eyebrows,” says Andy Jones, animation director. “We studied Zoe’s [Saldana] expression, and found it was really tricky to get the same feeling on her CG character without eyebrows. To prove it to [Cameron], I roto’d Zoe’s eyebrows out of her face, and he realized what we were up against. That’s when we textured in a pattern to get the feeling of eyebrows back in there.”

The motion captured from the actors on stage drove a facial system developed by Jeff Unay on their corresponding CG characters. To help with the lip sync, character designers had created the lips on the Na’vi to match those of the actors performing them. “We kept the characteristics of the actors and reshaped them into alien characters,” Letteri says. “That gave us a good basis.”

“Solving” software applied the data to Weta’s facial system, and a facial-solving team adjusted the result. The motion data worked best for lip sync and mouth movement; animators spent more time tweaking brow and eye animation. “When the overall expression straight out of the facial solve was not what it should have been, the team would push the data around to get the right poses and extremes, yet still keep the live feeling of the data,” Jones says. “As the team adjusted poses with sliders—they called it ‘tuning’ because they tuned the solve on various frames—the solving software learned which poses to use.”
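
The “tuning” workflow Jones describes, in which the solver gradually learns from artist-approved frames, can be illustrated with a toy example-based solver; the class below is a hypothetical sketch, not Weta’s software.

```python
# Hypothetical sketch of solve-then-tune: every artist-corrected frame becomes a
# training example, and later solves blend the approved weight sets, favoring
# the tuned frames whose marker data most resembles the current frame.
import numpy as np

class TunedFaceSolver:
    def __init__(self, sigma=1.0):
        self.sigma = sigma
        self.examples = []   # list of (marker_vector, approved_blendshape_weights)

    def tune(self, markers, approved_weights):
        """Store a frame whose automatic solve an artist corrected by hand."""
        self.examples.append((np.asarray(markers, float),
                              np.asarray(approved_weights, float)))

    def solve(self, markers):
        """Weighted blend of approved poses, closest tuned frames dominating."""
        markers = np.asarray(markers, float)
        if not self.examples:
            raise ValueError("no tuned frames yet")
        dists = np.array([np.linalg.norm(markers - m) for m, _ in self.examples])
        kernel = np.exp(-(dists / self.sigma) ** 2)   # Gaussian falloff
        kernel /= kernel.sum()
        approved = np.stack([w for _, w in self.examples])
        return kernel @ approved
```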

Unay based the underlying system on blendshapes. “We started with a dynamic muscle rig for the faces, but although it was good at preserving volume, it was coming up short in terms of level of detail,” Jones says. “[Cameron] was very specific. If he saw tension in Zoe’s mouth, he wanted exactly that [in Neytiri]. We had to art-direct and sculpt her face.”

So, Unay modeled blendshapes to mimic a volume-based system using FACS, which describes the muscle groups that control parts of the face. Thousands of shapes. The resulting rig for Neytiri, for example, has 1500 blendshapes. “The animators use sliders that control only about 50 shapes at a time,” Jones says. “The system switches to banks of shapes depending on which muscle sliders they move. It all happens under the hood without the animators knowing. The combinations of shapes look amazing; the skin looks like it’s pressing and pulling.”

As the animators worked in Autodesk’s Maya, they could bring up, on their screens, reference video shot in HD from multiple angles. “We could see the skin and get the timing from the helmet camera, but it distorted the face too much to see the overall mood,” Jones says. “We needed cameras farther away.”


Animators at Weta persuaded director James Cameron to add a stripe pattern to suggest eyebrows on the Na’vi’s faces to help give the computer-generated characters the same emotional feeling as the actors performing them.


Animating Performances

Animators also keyframed Na’vi ears and tails. “We’d whip their tails around if they were upset, and use them as a counterbalance when they ran,” says Jones. “They were like another appendage. We also found the ears really useful for adding emotion to the character.” The ears tell when a Na’vi is angry or shocked, just as they do for cats and dogs.

For the Na’vi bodies, the motion capture worked extremely well. “Giant’s body capture was fantastic,” Jones says. “We still had to animate their hands and fingers, but the offsets and targeting and retargeting were well done. They kept the weight. And, the data was clean.”

The characters’ design might have helped with the retargeting. Rather than completely altering the human proportions, the designers created the Na’vi with similar proportions to humans, but with slim hips, narrow shoulders, and long necks. “It made the retargeting process easier,” Jones says.

Oddly, although animators often use motion-captured data to add the tiny movements that help bring a standing character to life, Weta’s animators found themselves adding jitter back into the mocap data in some cases.

“When someone was yelling or screaming, the high-frequency jitters were often filtered out,” Jones explains. “The system couldn’t distinguish muscle shake from capture noise. So we would animate it back in, and all of a sudden it felt like the characters were screaming, not just opening their mouths. We had the body muscle rig, but when a bicep fires, there needs to be a jitter. When [Cameron] saw us doing that, he really loved it.”
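
As a rough illustration of what Jones describes, re-introducing the filtered-out shake can be as simple as adding velocity-gated, high-frequency noise back onto an animation channel; the function and parameter values below are hypothetical.

```python
# Illustrative only: put back high-frequency "muscle shake" that mocap filtering
# removed, scaled by how energetically a channel is moving, so quiet poses stay
# clean but screaming jaws and straining limbs pick up jitter again.
import numpy as np

def add_muscle_jitter(curve, fps=24, amplitude=0.15, seed=0):
    """
    curve     : (N,) filtered animation channel (e.g., jaw-open or elbow angle)
    amplitude : jitter size relative to the channel's frame-to-frame change
    Returns the curve with velocity-gated high-frequency noise added back.
    """
    rng = np.random.default_rng(seed)
    velocity = np.abs(np.gradient(curve)) * fps        # how hard the channel moves
    noise = rng.normal(0.0, 1.0, size=curve.shape)
    # Keep only the high-frequency part of the noise (its difference from a
    # smoothed copy) so the added jitter never drifts the underlying pose.
    smooth = np.convolve(noise, np.ones(5) / 5.0, mode="same")
    high_freq = noise - smooth
    return curve + amplitude * (velocity / fps) * high_freq
```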

The muscle rig is new, developed at Weta specifically for this film. “It’s a dynamic system that simulates muscles properly,” Saindon says. “It calculates the fat layers and colliding volumes much more accurately than in the past.”

Prior to this, after animation, the character TDs needed to fine-tune the look of the character and fix problems—intersections, muscles that didn’t look right, and so forth—by sculpting the character on a shot-by-shot basis. With the new system, that was rarely necessary.

“We’d get something much more accurate and realistic straight out of the box,” Saindon says. “We had to do little in the way of going back and fixing things.”

Creatures


In addition to the characters, Weta animators performed approximately 10 creatures, a hellfire wasp, and thousands of insects. “Every single frame has something alive in it, whether it’s a moving plant or bugs,” Williams says.

Of the creatures, four fly and most have six legs. “Our first approach was typically to hide the middle legs, animate the animals as quadrupeds, and then bring the middle legs back in,” Jones says. The animators might animate a horse-like creature by having the leg movement cascade, or change the gait by changing the offset. A cat-like creature might arch its back, lift its front legs, and use them as arms and hands.
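
One way to picture the quadruped-first approach, offered here as a hypothetical sketch rather than Weta’s rig, is to derive each hidden middle leg from a time-offset blend of its front and hind neighbors, with the offset controlling how the gait cascades.

```python
# Hedged sketch of the cascade idea: once the front and hind legs are animated
# as a quadruped, a middle leg's curve can be a delayed blend of its neighbors,
# and changing the delay changes the gait.
import numpy as np

def middle_leg_curve(front, hind, offset_frames=4, blend=0.5):
    """
    front, hind   : (N,) rotation curves for the front and hind legs on one side
    offset_frames : phase delay that makes the stride ripple from front to back
    blend         : 0 = copy the front leg, 1 = copy the hind leg
    Returns an (N,) curve for the middle leg on that side.
    """
    mixed = (1.0 - blend) * np.asarray(front) + blend * np.asarray(hind)
    return np.roll(mixed, offset_frames)   # simple wrap-around delay for cycles
```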

Jake learns to ride a creature that looks like a flying horse, and for those shots, the crew used a gimbaled motion-control rig. “The good thing about motion capture was that it gave us the posing [Cameron] liked for the character on top of the creature, where the character should be looking, and the riding style,” Jones explains. “But it was obvious that his legs weren’t reacting to his chest popping up and down, so we couldn’t use the motion capture completely.”

Am I Blue?

Facial capture was perhaps the biggest challenge. The second biggest challenge for the technical team was keeping the aliens from looking like someone had poured blue paint on them. “It was a tricky problem,” Letteri says. “They needed to have warmth under their skin, so we had to find the right shades of blue and blood color that would look good in firelight, blazing sun, overcast skies, and rain. Blue skin quickly wants to look like plastic.”

Seeing Virtual

To film the CG characters and creatures in their digital world, James Cameron used a virtual camera. “Imagine a nine-inch LCD screen with a steering wheel around it and tracking markers on it,” says visual effects supervisor Stephen Rosenbaum. “A stage operator would load the CG puppets and environment and set up the lighting, and then Jim [Cameron] would pick up this virtual camera and move it around the environment. It drove [Autodesk] MotionBuilder’s camera, so he could see the characters perform and set up camera angles as they delivered their performance.”

With traditional motion capture, directors record the performances, edit them, and then derive the camera angles. With this system, Cameron could move around the performance stage and compose shots while seeing the actors’ performances, including facial expressions on the CG characters.

“He could dolly in, pan, boom, have any rig he wanted,” Rosenbaum says. “He could have a huge crane, a wire rig, a Steadicam, a dolly rig. It didn’t matter. There was a three- or four-frame latency when we were doing full-body and facial performances, but it wasn’t significant enough to affect his shooting.” –Barbara Robertson


For skin texture reference, the crew did photo shoots under controlled lights of young people with the most perfect skin they could find. “We discovered that even someone with nearly flawless skin still has lots of imperfections in displacement and color. They have nodules, bumps, pink around their eyes, and blotchy layers,” Williams says. Painters added these imperfections to the texture maps and created a pore structure for the aliens that looked realistic. All this helped make their skin come alive.

As for the color, even though the aliens had blue skin, the crew put red blood in their veins, and did so without turning their skin purple. “Before, we had more of an analytical approximation for subsurface scattering,” Williams says. “We went to an absorption-based subsurface scattering routine. The system we use now does proper frequency-based scattering.”

Because the shaders used the actual wavelengths for red transmission through the nose, ears, and pores of the skin, the red blood didn’t cause the blue skin to turn purple. The crew also added a little red to the skin tone. Then, they applied some of the same techniques, with shaders written for Pixar’s RenderMan, to the plants.
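
A short worked example shows why per-wavelength absorption behaves this way: with Beer-Lambert falloff, red survives a short path through tissue while green and blue are absorbed, so the warm transmission appears only in thin, backlit features instead of tinting the whole surface purple. The coefficients below are illustrative guesses, not Weta’s measured values.

```python
# Worked sketch (not Weta's shader) of wavelength-dependent absorption.
import numpy as np

# Assumed coefficients: blood and tissue absorb green and blue far more strongly
# than red (units of 1/cm, chosen for illustration only).
absorption_rgb = np.array([0.9, 6.0, 8.0])

def transmitted_light(backlight_rgb, thickness_cm):
    """Beer-Lambert: light surviving a straight path through the tissue."""
    return np.asarray(backlight_rgb) * np.exp(-absorption_rgb * thickness_cm)

white = np.array([1.0, 1.0, 1.0])
print(transmitted_light(white, 0.3))   # thin ear: a warm red glow survives
print(transmitted_light(white, 4.0))   # thick limb: almost nothing gets through
```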

Deep in the Jungle


“We cross-pollinated the efforts,” Williams says. “The plant shader now uses the skin shader.” The plants, however, aren’t blue, even though they started that way. Blue light from a blue sky bouncing off blue plants onto blue-skinned characters created uninteresting images.

“We needed to have other colors hitting the characters’ skin to give them the kind of complexity that helps make them look real,” Williams says.

At night, as the characters walk through the jungle, the plants glimmer with bioluminescence. The CG artists used subsurface scattering to cause thick plants to glow like a wax candle. “Some plants just have a glowing moss over them,” Saindon says. “It depended on the plant and how [Cameron] felt it should look.”

To create the rain forest, the Weta artists started with FBX files from Lightstorm that they imported into Maya scenes. “We had simple representations for where the trees and plants were,” Saindon says. “Jim moved and placed things where he wanted for camera angles. So, we did a one-to-one match at first to get a layout that he specifically liked.”

Because the plants needed to be dynamic, all of them are models created using a rule-based growth system. Although they average 100,000 polygons, some have as many as one million polygons.

“The plant-growing tools were almost like a modeling tool,” Williams says. “Once we grew a plant, we could instantly create variants by changing the seed value for the random functions.” The variants might change the number of branches and sub-branches, the height, the silhouette, the age, or other parameters.
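
The seed-driven variation Williams mentions is easy to picture with a miniature rule-based grower; the rules and ranges below are invented for illustration.

```python
# Hypothetical miniature of a rule-based plant grower: the same rules with a
# different random seed yield an instant variant, echoing how the crew
# describes generating plant variations by changing the seed value.
import random

def grow_plant(seed, depth=4):
    """Return a nested description of branches grown from simple rules."""
    rng = random.Random(seed)

    def branch(level):
        length = rng.uniform(0.5, 1.5) * (0.7 ** level)    # shorter higher up
        if level >= depth:
            return {"length": length, "children": []}
        n_children = rng.randint(2, 4)                     # rule: 2-4 sub-branches
        return {"length": length,
                "children": [branch(level + 1) for _ in range(n_children)]}

    return branch(0)

plant_a = grow_plant(seed=1)    # one plant
plant_b = grow_plant(seed=2)    # a variant: same rules, different seed
```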

The crew planted the jungle using painting techniques to place trees, shrubs, and grass. “It’s similar to [Maya’s] Paint Effects, but we aren’t creating geometry,” Saindon says. “The system is taking pre-existing geometry and placing actual full-res models at correct angles on the ground.”
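
Conceptually, such a planting brush records instances rather than generating geometry; the sketch below is an assumed, simplified version of that idea, not the studio’s actual tool.

```python
# Sketch of paint-based placement (assumed): each sample point under the brush
# receives a reference to an existing full-resolution plant model, aligned to
# the terrain, instead of any new geometry being created.
import random

def plant_stroke(stroke_points, ground_normal_fn, plant_library, seed=0):
    """
    stroke_points    : list of (x, y, z) samples under the artist's brush
    ground_normal_fn : callable returning the terrain normal at a point
    plant_library    : names of pre-built full-res plant models to reuse
    Returns instance records: which model, where, how oriented, how scaled.
    """
    rng = random.Random(seed)
    instances = []
    for point in stroke_points:
        instances.append({
            "model": rng.choice(plant_library),    # reuse existing geometry
            "position": point,
            "up_vector": ground_normal_fn(point),  # sit flush on the terrain
            "rotation": rng.uniform(0.0, 360.0),   # random yaw for variety
            "scale": rng.uniform(0.8, 1.2),
        })
    return instances
```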

They also used Massive’s software to grow forests. When artists planted seeds on a terrain, Massive would simulate a forest growing and competing for light and space. Bigger trees grew quickly, smaller plants died, and shade-loving ferns grew around the base of the large trees.


Jake Sully (actor Sam Worthington) prepares to inhabit the avatar body resembling a Na’vi, seen forming in the tank behind. The color palette for the film reflects James Cameron’s fascination with the underwater world.


“We’d create large areas, and then on a shot-by-shot basis, would sculpt scenes to play well for the camera and the depth of the scene,” Williams explains. “All of our show is done inside Maya, and everything in the jungle is 3D, so when you move the camera around in Maya, you get a real 3D sense.”

To light and render the massive jungle, Weta implemented two techniques: stochastic pruning and spherical harmonics. The stochastic pruning threw away unnecessary geometry on the fly as a plant moved away from camera. “It might take a fern with a million polygons and push it back to a few pixels when it’s in the distance,” Saindon says.
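
Stochastic pruning is a published level-of-detail idea, and its core fits in a few lines: drop a random but repeatable subset of a plant’s elements as it shrinks on screen, and scale up the survivors so coverage holds. The thresholds below are illustrative, not Weta’s settings.

```python
# Minimal sketch of stochastic pruning: the smaller a plant appears on screen,
# the larger the fraction of its leaves (or polygon clusters) that is discarded,
# with the survivors enlarged so the overall silhouette stays roughly the same.
import random

def prune(elements, screen_height_px, full_detail_px=400.0, seed=0):
    """
    elements         : renderable pieces (e.g., leaves) of one plant
    screen_height_px : the plant's projected height in pixels this frame
    Returns (surviving elements, per-element area scale).
    """
    keep_fraction = min(1.0, screen_height_px / full_detail_px)
    rng = random.Random(seed)               # fixed seed: no popping frame to frame
    survivors = [e for e in elements if rng.random() < keep_fraction]
    area_scale = 1.0 / max(keep_fraction, 1e-3)   # survivors cover the lost area
    return survivors, area_scale
```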

Spherical harmonics, a technique used for real-time rendering in video games, made it possible to light the rain forest. “Basically, we store coefficients for angles,” Saindon says. “We calculate the harmonics for each individual plant, all the lighting angles, and store that on the geometry. That allows us to drop simple lights into the scene and still get proper occlusion from each plant. The plant does its own self-occlusion using its own harmonics, seeing what should be occluding what, and stores the information. That means we can light an entire jungle with one light. We could get complex lighting with a very simple setup. We couldn’t have done the movie without it.”
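
What Saindon describes follows the precomputed-radiance-transfer pattern familiar from games: project each point’s cosine-weighted, self-occluded visibility into a few spherical-harmonic coefficients offline, then shade with a dot product against the light’s coefficients. The sketch below keeps only the first two SH bands, and its function names are assumptions.

```python
# Compact sketch of per-plant spherical-harmonics self-occlusion (bands 0-1).
import numpy as np

def sh_basis(d):
    """Real spherical-harmonic basis, first two bands, for unit direction d."""
    x, y, z = d
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def precompute_transfer(normal, is_visible, n_samples=512, seed=0):
    """Monte Carlo projection of (visibility * cosine) onto the SH basis.

    is_visible(d) should return False when the plant occludes itself along d.
    """
    rng = np.random.default_rng(seed)
    coeffs = np.zeros(4)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                      # uniform direction on sphere
        cosine = max(0.0, float(np.dot(normal, d)))
        if cosine > 0.0 and is_visible(d):
            coeffs += cosine * sh_basis(d)
    return coeffs * (4.0 * np.pi / n_samples)       # Monte Carlo normalization

def shade(transfer_coeffs, light_coeffs):
    """With transfer baked in, lighting collapses to a single dot product."""
    return float(np.dot(transfer_coeffs, light_coeffs))
```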

Even so, the data processing requirements for the show were enormous. In addition to the characters, Weta created volumetric explosions, fireballs, 3D water simulations, and other effects. “Joe [Letteri] set down the hard line,” Williams says. “He told us not to plan on cheating anything.” At one point during postproduction, the studio was generating 110GB of data an hour.

“Jim Cameron’s expectations are extremely high, and he demands a lot,” Rosenbaum says. “The scope of CG movies is getting so large and the time constraints too tight, that people tend to compromise, but Jim doesn’t compromise. He insists on a high standard. When I worked on The Abyss, it took us six months to create 90 seconds with the pseudopod. We went into it with the same question we had on this film: How the hell will we do this? And we had the same mind-set: We’ll put our heads together and figure it out. He’s always one to push a VFX company. And he certainly did it on this one.”