Chimp Change
Issue: Volume 34, Issue 7 (Aug/Sept 2011)

When the leading role in a critically acclaimed film is a CG character performed by a human actor, who receives the accolades? The actor who created a riveting performance without speaking a word? The visual effects artists who translated that performance onto a dissimilar face and body?

Numerous critics call Caesar, the chimpanzee star of Rise of the Planet of the Apes, the most expressive character in the film. But, it’s more likely that Weta Digital animators and artists will win Oscars for Best Visual Effects than Andy Serkis, the actor who aped Caesar, will win an Oscar for Best Actor. That there is buzz about Serkis receiving a nomination is a tribute to the animators and the visual effects artists, not just the actor.

To turn the CG apes into film stars, the artists at Weta drew on their experiences in applying motion and performance capture to several characters in previous films: Gollum in The Lord of the Rings, King Kong in his namesake film, and the Na’vi in Avatar. So, it made sense for the producers and director Rupert Wyatt to look for help from Weta.

“The schedule was so compressed that Fox brought us in and asked if we could do it,” says Joe Letteri, senior visual effects supervisor. “Fox knew that they wanted the apes to be all-digital, and that was a big part of getting the film done. Because the schedule was tight and everything was so intertwined, we had to say we could take it all on. From start to finish, it took about a year.”

Given the tight schedule, the chimpanzee star, and the number of apes to animate, the crew at Weta Digital knew they wanted to use performance capture, a decision that worked out especially well once Serkis, who had played Gollum and Kong, agreed to perform Caesar.

But, they also knew that although they could draw on some systems used in the past, the facility would need to develop new technology, as well. “We wanted to take performance capture into the real world,” says Dan Lemmon, visual effects supervisor. “There was so much interaction between the apes and the human actors, the idea of doing extensive onset motion capture was a natural direction to try to go. To do that, we needed to deal with sunlight, hot set lighting, and have a system that was portable and flexible enough to adapt quickly and move with the production.”

Although Weta had experimented with on-location motion capture for The Lord of the Rings and had pushed the state of the art to capture the actors’ performances on set for Avatar, the studio hadn’t done on-location performance capture at the level required for this film. No one had.

“We based a lot of what we did for this film on what we had done for Avatar,” Letteri says. “On Avatar, our facial-capture technology moved from glued-on markers to the head rig.” (The “head rig” is a helmet with a camera that records an actor’s facial movements. A compact flash drive stores the recorded video.)

Capturing facial movement for this film was important, too, but the apes communicate primarily through pantomime, so body capture was also critical. Thus, because chimps and humans are close in size (much closer than King Kong or Avatar’s 10-foot-tall Na’vi), on-location performance capture made sense. “We knew we had to [create the apes] this way,” Letteri says. “Rupert Wyatt knew about it. And, of course, we had done this before with Andrew Lesnie [cinematographer] for The Lord of the Rings. It was a good working team; everyone knew what needed to be done.”

On set, that meant the production crew worked with the Weta team to be sure they got all the data they needed. For its part, Weta tried to be quick. “We tried hard not to hold things up,” Lemmon says. “And, everyone on set was pretty accommodating. The question was always, Do we hold things up for 10 minutes or move on?”



At top, animators used keyframe animation for Caesar’s fingers, which weren’t motion-captured. At bottom, Weta captured the performances of as many as six actors at a time wearing head rigs and suits with LED markers, on location.

Bright Lights

Prior to filming, Weta prepped all the actors who would be on set, fitting them with the head rigs and new capture suits containing active LEDs. “We didn’t use optically reflective markers,” Letteri says. “We built LED suits. This was new technology for the on-set capture.”

Although the team had considered having actors wear specially marked suits, capture their performances on set with witness cameras, and convert the motion into data later by using optical tracking techniques, they felt that a direct motion-capture system would provide higher-fidelity data. But, the traditional motion-capture systems they had used for previous films were too bulky and cumbersome to have on set for an entire production.

“The challenge was to boil motion capture down into a portable, flexible, lightweight system that we could reconfigure quickly,” Lemmon says. LED suits provided that solution. Moreover, “the LEDs weren’t affected by the set lighting, and they worked in all kinds of situations—outside in sunlight, with light reflecting off cars,” Letteri says. “We could phase the LEDs so the motion-picture camera couldn’t see them and so we didn’t have stray light bouncing into the camera.”

The system, built to Weta’s specifications, included a control pack worn by the actor that communicated to a computer via an industrial-strength, long-reach Bluetooth transmitter. From the control pack, six strands of wires with attached LEDs stretched around the actor’s body. Circling each LED was a colored marker. Four of the strands were Velcro’d onto the limbs, with one strand extending down each arm and leg. The other two ran down the front and back, with the head sharing the back strand.

“It’s an extension of a traditional optical motion-tracking system,” Lemmon explains. “We used the cameras and pieces of software that we would use for a permanent capture volume, but the biggest thing was dealing with sunlight. A traditional motion-capture system couldn’t handle that intensity of light. So we switched from reflective markers to the active LED system and flashed infrared light back to the cameras. We synched the LED markers to motion-capture cameras that we synched to the motion-picture camera’s time code.”
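
In outline, that time-code bookkeeping amounts to simple arithmetic. The sketch below is purely illustrative, with assumed frame and sample rates (Weta’s actual rates and tools were not disclosed); it shows how a non-drop-frame SMPTE timecode maps to a film frame and to the motion-capture samples that cover it.

FILM_FPS = 24          # motion-picture camera rate (assumption for the example)
MOCAP_HZ = 120         # motion-capture sampling rate (assumption for the example)

def timecode_to_frame(tc: str, fps: int = FILM_FPS) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def mocap_samples_for_frame(tc: str) -> range:
    """Return the mocap sample indices that fall within one film frame."""
    frame = timecode_to_frame(tc)
    per_frame = MOCAP_HZ // FILM_FPS      # five mocap samples per film frame here
    start = frame * per_frame
    return range(start, start + per_frame)

print(list(mocap_samples_for_frame("01:00:00:12")))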

On location, the team ran two units simultaneously, with as many as eight actors in the LED suits: two in one area and six in the other. Approximately 80 cameras captured the action, with half in each area. In addition, four witness cameras shot the actors’ hands and faces, and provided a wide-angle view of the bodies.

“It was a huge amount of data,” Lemmon says. “We would fill several hard drives daily. At the end of each day, we would duplicate the data, send a version from Vancouver to New Zealand, and clear the hard drives for the next day’s work.”

Setting Up

When possible, the crew would mount the motion-capture cameras permanently on the sets or stages. When the action took place on city streets and it was impossible to affix the cameras permanently, they used aluminum truss towers on rollers that are similar to those used for concert lighting. At the end of each “T-bar” across the top, the crew attached pairs of calibrated cameras.

“We developed new techniques for pre-calibrating cameras in pairs, some new math, some new software technology that we bolted onto our existing system,” Lemmon says. “Also, we had new techniques for running the calibration processes more quickly once the cameras were in situ. It was a big paradigm shift for our motion-capture technician.”

The paradigm shift reduced the time needed on set from the typical week-long effort for setting up, aiming, and calibrating the cameras to 20 minutes.
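
The gain comes from doing the expensive work ahead of time: once a pair’s internal geometry and relative offset are locked off, only the pair’s pose in the new set needs to be found. The toy Python example below shows that composition step; the transforms and distances are invented, and none of it reflects Weta’s actual calibration code.

import numpy as np

def make_pose(R, t):
    """Build a 4x4 rigid transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Measured once, off set: camera B's pose expressed in camera A's frame.
B_in_A = make_pose(np.eye(3), np.array([0.5, 0.0, 0.0]))   # 0.5 m apart (assumption)

# Solved quickly on set: camera A's pose in the new set's world space.
A_in_world = make_pose(np.eye(3), np.array([10.0, 2.0, 4.0]))

# Camera B's world pose follows by composition; no second calibration pass needed.
B_in_world = A_in_world @ B_in_A
print(B_in_world[:3, 3])      # [10.5  2.   4. ]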

“The setup time was really good, and the calibration was fast,” Letteri says. “Some of it was due to practice. A lot was having pieces pre-rigged so we could get everything into position as quickly as possible. We tried to pre-rig the day before. If you have the space covered, and you just have to move a few cameras to shoot ‘over there,’ the calibration is easier to deal with; you don’t have to recalibrate all the cameras.”

On occasion, the crew ran into situations in which they needed to re-rig the entire set, but when that happened, the production needed to change gear at the same time, too, so reconfiguring the mocap setup didn’t cause delays.

Most of the capture volumes were inside 20-by-20-foot greenscreen sets that the Weta crew would later extend digitally. One of the biggest volumes, though, was the 300-by-60-foot set built on a back lot in Vancouver for the climactic sequences on the Golden Gate Bridge. On that set, the crew could capture about half the space at one time. “We had a few dozen cameras in birdhouses all the way down the length,” Letteri says.

In addition to replacing the actors whose performances they captured on the bridge, the effects crew replaced the set with a digital bridge, put it over the water, and added the digital background of San Francisco Bay.

“Capturing the performances on set was a lot of fun,” Lemmon says. “It was a unique and different problem than those we normally deal with on set when we’re doing visual effects. We felt more integrated with the production process. A lot of the time we’re there to grab data as quickly and surreptitiously as possible. In this case, there was no avoiding being part of the production. We provided the main characters for many of the shots in the end, so we had to make sure everything for principal photography was correct.”

Give Me Hair

To coat the apes with dynamic hair, Weta Digital created a new, interactive fur grooming tool. “For Kong, we had a procedural system,” says Joe Letteri, senior visual effects supervisor. “The new system lets us comb and groom the hair. We wanted to groom the hair directly. And, we wanted our artists to do dozens of apes. So we developed some clever software that uses curve geometry with some hardware acceleration. It’s closer to being fun to use.”

The new system still uses guide hairs and some form of interpolation. The innovation is in the way the artists groom the hair. “It’s more of a sculpting process,” says Dan Lemmon, visual effects supervisor. “We use digital brushes and combs to push the guide hairs around. The artists don’t manually pull curves or use sliders. They use styling tools. They apply control with a brush rather than with parameters. It’s a bit like using a paint program’s brush, but the radius also takes into account direction and orientation. With each stroke, the artist changes the guide-curve position, orientation, and the amount of noise. It’s a completely new tool with its own way to store a fur set for a character.”
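
Conceptually, that workflow pairs two simple ideas: a sparse set of guide hairs drives the full coat by interpolation, and a brush edits the guides within a falloff radius rather than through sliders. The toy Python sketch below illustrates both; the data and falloff are invented, and it bears no relation to Weta’s actual implementation.

import numpy as np

rng = np.random.default_rng(0)
guide_pos = rng.uniform(0, 1, size=(50, 3))      # guide-hair root positions
guide_dir = np.tile([0.0, 0.0, 1.0], (50, 1))    # initial grooming direction

def brush(center, radius, new_dir):
    """Blend guide directions toward new_dir, weighted by distance to the brush."""
    d = np.linalg.norm(guide_pos - center, axis=1)
    w = np.clip(1.0 - d / radius, 0.0, 1.0)[:, None]        # falloff inside the radius
    blended = (1 - w) * guide_dir + w * np.asarray(new_dir)
    guide_dir[:] = blended / np.linalg.norm(blended, axis=1, keepdims=True)

def interpolate(follicle, k=3):
    """Direction of a rendered hair: inverse-distance blend of the k nearest guides."""
    d = np.linalg.norm(guide_pos - follicle, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    direction = (guide_dir[idx] * w[:, None]).sum(axis=0) / w.sum()
    return direction / np.linalg.norm(direction)

brush(center=np.array([0.5, 0.5, 0.5]), radius=0.3, new_dir=[1.0, 0.0, 0.0])
print(interpolate(np.array([0.5, 0.5, 0.5])))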

The tool, dubbed “Barbershop,” rides on top of Autodesk’s Maya and plugs into Pixar’s RenderMan. –Barbara Robertson




Moving On

In addition to Serkis, actors of varying sizes played other apes during principal photography—small people playing small apes, a large actor playing the gorilla. Six featured apes and another dozen variants star in the film. “We needed them to carry the scenes in pantomime,” Letteri says. “Andy Serkis and Terry Notary led the charge, getting everyone to look at how to perform apes.”

To help the actors, Weta Digital set up a capture volume on a separate stage where the actors could rehearse and see themselves puppeteering the CG apes in real time. In addition, the crew gave the actors arm extensions.

“It’s always interesting when actors start working with performance capture for the first time,” Letteri says. “When they see themselves as other characters and inhabit those characters, you can see them play with subtle motion to make the characters look real, especially for the action scenes. Their first thought is to be quick and safe, and then they remember how a chimp would move. We made sure they had the space to do a lot of rehearsal.”

Even so, there were scale challenges on set. When Serkis played Caesar as a toddler, for example, the crew put tape on his chest to give actor James Franco the proper eye line. “It was a challenge for them,” Lemmon says. “If the actors touched an ape actor, it might give away the size.”

Previsualizing the Action

Previsualization supervisor Duane Floch of Pixel Liberation Front rode herd on the previs and postvis efforts for Rise of the Planet of the Apes, joining the project in summer 2010. “When I came on, there were six companies doing previs,” he says. “They had been up and running for six weeks prior. It was getting to the point where Kurt Williams, the co-producer, had so much to look at and go through that it was a strain, so he asked me to take charge of the first pass. We’ve worked together on a number of films and think alike.”

The volume was so huge because every shot with an ape had to be previs’d, and each previs went through multiple iterations. “I could get the previs to a certain point, show it to Kurt, disperse notes, and answer questions,” Floch says. “Once shooting started, we had to keep ahead of that giant rolling ball that was the shooting schedule.”

Previs artists worked from storyboards to block in humans, apes, and cameras for approximately 1500 shots. Previs gave the director a sense of timing, the first look at how the story moved through several cuts with characters performing, as seen through a camera. To do this, the artists needed to animate humans and apes.

“We had a Disney animator on board who created a lot of chimp reference materials,” Floch says, “idle monkey cycles, apes sitting around scratching, walking, and running. But, we also had a lot of story points that we had to convey with gestures. The characters don’t talk. So the rigs had a good amount of facial expression control.”

For action sequences, though, the crew eliminated the rigs. “Some scenes had 150 apes and more,” Floch says. “To handle that amount of data, we transferred the animation onto meshes, the geometry itself, and got rid of rigs altogether, which made the characters light. The running apes just ran with pre-animated motion, but if they had rigs, they would have been unworkable.”
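
Baking animation down to geometry is a standard crowd trick: evaluate the skinned rig once per frame, store the raw vertex positions, and play those back with no rig in the scene. The sketch below shows the idea with linear blend skinning in Python; the data layout is assumed for illustration and is not PLF’s actual tool.

import numpy as np

def skin_vertices(rest_verts, weights, joint_mats):
    """Linear blend skinning: weighted sum of each joint's transform per vertex.
    joint_mats are assumed to already include each joint's inverse bind matrix."""
    homo = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)
    skinned = np.einsum("vj,jab,vb->va", weights, joint_mats, homo)
    return skinned[:, :3]

def bake(rest_verts, weights, joint_mats_per_frame):
    """Return a (frames, verts, 3) vertex cache; no rig is needed at playback."""
    return np.stack([skin_vertices(rest_verts, weights, m)
                     for m in joint_mats_per_frame])

def translate(x):
    """A joint transform that slides along X (inverse bind already folded in)."""
    T = np.eye(4)
    T[0, 3] = x
    return T

# Tiny demo: one triangle skinned by two joints over three frames.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
cache = bake(rest, weights, [np.stack([np.eye(4), translate(x)]) for x in (0.0, 0.1, 0.2)])
print(cache.shape)   # (3, 3, 3): frames x vertices x xyz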

Previs worked to get the intent of a sequence nailed down and approved, but things always change during filming. “They had Andy Serkis, Terry Notary, and other actors from the motion-capture team dressed in gray suits with cameras on their heads,” Floch says. “We referred to them as the grapes, the gray apes. They shot plates with and without the performance-capture actors, so sometimes the cuts selected were blank plates and sometimes they had grapes. Because it’s difficult for an editor to work with footage that doesn’t have any action, we’d add apes in postvis to inform the cut and the visual effects teams.”

For the postvis, the artists needed to track cameras, a skill not necessary for previs.

“We don’t have a tracking department with artists who only track cameras,” Floch says. “But, everyone on our team knew how to get a good 3D track, so they could fill the plates with animation. We had saved a lot of cycles in our library, but when nothing matched what they wanted, the artists started keyframing from scratch.”

Floch enjoys both sides of the process—previs and postvis. “It’s great fun helping create the initial pass in previs because it’s everyone’s first look,” he says. “It’s collaborative. And then, once you’re working in plates, it’s fun to put the characters in and to make footage work when it wasn’t shot that way.” –Barbara Robertson

Talk to the Animals

The data captured from the actors’ performances moved from the stage to a department of motion editors who translated that data into curves for an animation system, using techniques originally developed for King Kong and refined since. The goal was to move data representing the captured motion of an actor’s skin onto CG muscles beneath the digital face. Thus, when an animator moved a muscle in the animation system to create an expression, the digital skin behaved appropriately.

Dedicated actors played all the principal characters, and to aid with the data translation for those actors’ performances, the crew at Weta had “pre-calibrated” each actor.

“Our facial system is built on Paul Ekman’s FACS system,” Lemmon says. “We had them go through a standard battery of FACS expressions and a range of movement for their body. We filmed them with witness cameras and digital SLRs. For the hero actors, we also did a high-resolution motion-capture session using tiny capture markers on their faces.”
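
One common way to use such a calibration session, shown schematically below, is to treat each recorded FACS expression as a basis shape and solve each captured frame for the non-negative mix of expressions that best explains the marker offsets. The basis, counts, and solver here are assumptions made for illustration, not a description of Weta’s solver.

import numpy as np
from scipy.optimize import nnls

n_markers, n_shapes = 60, 40
rng = np.random.default_rng(1)

# Columns: per-expression marker offsets measured in the actor's calibration
# session (e.g. "brow raiser", "lip corner puller"). Random here, for the demo.
facs_basis = rng.normal(size=(n_markers * 3, n_shapes))

# One captured frame: marker offsets from the actor's neutral face.
true_weights = np.clip(rng.normal(0.2, 0.3, n_shapes), 0, 1)
frame_offsets = facs_basis @ true_weights

# Solve for the non-negative activation weights that best explain the frame.
weights, residual = nnls(facs_basis, frame_offsets)
print(weights.round(2), residual)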

Ultimately, though, the data would need to drive simian characters, not humans. “The biggest difference was in the design of the faces,” Lemmon says. “The Na’vi in Avatar mimicked the actor’s features. And, the art directors redesigned Gollum’s face to look more like Andy Serkis, which helped give us solid anchors for markers. It is more difficult with apes. There isn’t a one-to-one correspondence. We had to look at the expression Andy made and translate it into Caesar’s anatomy.”

Before creating Caesar and the other apes, the crew did extensive research into ape physiology, muscles, facial structure, and body language. “As you can imagine, in addition to the differences in anatomy, there are also differences in the way chimps use facial expressions,” Lemmon says. “When a chimp smiles, it can mean it is scared or it is trying to be aggressive.”

Even though the film is an origin story and the animators and artists tried to be as faithful as possible to modern-day chimpanzees, orangutans, and other apes, they knew that movie audiences would not understand that a chimp’s smile doesn’t mean it’s happy.

“We looked at reality as a reference point as much as possible,” Lemmon says. “But when it came to Caesar’s facial performance, we wanted Andy [Serkis’] performance to come through and be readable. So, we set up a system that translated the movements of Andy’s facial markers into muscle firings that we translated into something slightly different, but with the character of Andy’s performance, for the ape. But this happens when we translate performances onto humans, as well. We need a lot of human intervention to get the result to look as close as possible to the facial reference we get from the witness cameras.”


Actors wore arm extensions on set, and although the LED markers didn’t extend the full length, the data translation system took the “crutches” into account.

Big Steps


To help with the facial and performance animation, Weta Digital researchers created a new muscle system. “We have a dynamic solve that goes on top of the animation layer,” Letteri says. “It adds ballistics to the face, and smoothes the muscle and skin movements across the face, distributing them properly.”
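
A rough way to picture a dynamic layer like that is a damped spring chasing each animated value, adding a little inertia, follow-through, and smoothing on top of the keyed or captured curve. The toy filter below illustrates the idea; the parameters and the per-curve-spring formulation are assumptions made for the example, not Weta’s muscle system.

import numpy as np

def spring_filter(curve, dt=1/24, stiffness=300.0, damping=25.0):
    """Second-order filter: the output lags and smooths the input like a mass on a spring."""
    out = np.empty_like(curve)
    x, v = curve[0], 0.0
    for i, target in enumerate(curve):
        a = stiffness * (target - x) - damping * v   # pull toward target, damped by velocity
        v += a * dt
        x += v * dt
        out[i] = x
    return out

step = np.concatenate([np.zeros(12), np.ones(24)])   # an abrupt keyframed change
print(spring_filter(step).round(2))                  # smoothed, with slight follow-through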

For the bodies, a character-mapping process scaled the data captured from the human actors to the size of the ape they play, and adapted the proportions appropriately for the apes’ longer arms and shorter legs. The arm extensions, essentially foot-long crutches, helped. Even though the LEDs didn’t extend into the crutches, the system knew how far the crutches extended beyond the hand.
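
In simplified form, two pieces of that mapping look like the sketch below: scaling a captured bone to the ape’s proportions, and projecting past the hand to where the unmarked crutch met the ground. All lengths and names are invented for illustration.

import numpy as np

CRUTCH_LENGTH = 0.30   # roughly a foot-long extension beyond the hand (assumption)

def scale_segment(parent, child, human_len, ape_len):
    """Scale one captured bone vector from human to ape proportions."""
    return parent + (child - parent) * (ape_len / human_len)

def crutch_contact(elbow, wrist):
    """Estimate where the crutch tip touched the ground: the hand position plus
    the known crutch length along the forearm direction (no LEDs on the crutch)."""
    direction = (wrist - elbow) / np.linalg.norm(wrist - elbow)
    return wrist + direction * CRUTCH_LENGTH

elbow = np.array([0.0, 1.0, 0.0])
wrist = np.array([0.0, 0.3, 0.2])
print(crutch_contact(elbow, wrist))                                # crutch tip position
print(scale_segment(elbow, wrist, human_len=0.28, ape_len=0.38))   # longer ape forearm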

“It’s procedural, but not automatic,” Lemmon explains. “We had to adapt and adjust it depending on what the actor was doing, whether he was standing, sitting, or on all fours pushing with his knuckles and legs.” Animators then refined the performance using keyframe animation.

“We still have lots of keyframe animation, even with the performance capture,” Letteri says. “You can work out the best translation you can think of through the software, and still someone has to make adjustments to the leg, heel, and knee to get the weight right.”

Animators also performed many of the background characters, and for shots that the human actors couldn’t perform, animators keyframed those characters as well. “People can’t climb a tree like an ape, no matter how hard they try,” Letteri says.

As with most new techniques and technology, Weta Digital adapted previous systems to create the apes’ performances for this film. Sometimes, however, evolutions are revolutionary. It wasn’t important to capture actors playing Avatar’s Na’vi outside in the real world because they lived on another planet that existed only as a digital environment, but the apes in Rise of the Planet of the Apes needed to be in the real world.

When King Kong was in the real world, Serkis was off to the side in a scissor lift to give the actors a correct eye line. The motion capture happened later, on a separate stage, where Serkis performed to the live-action footage. “We could have done that for this film because Andy is so great,” Letteri says. “But, we didn’t want the actors in this film to act to tennis balls.”

Letteri believes the solution they devised—a flexible, portable adaptation of the performance-capture system, which can capture actors’ facial and body movements in daylight, on location—is a big filmmaking breakthrough.

“The whole point is to make the visual effects part of the filmmaking process,” Letteri says. “That’s what Jim [Cameron] was after with Avatar. Breaking down the barrier between live action and digital filmmaking. It didn’t make sense for Avatar, but it did for this film. This is the last step.”

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.