Yo Ho Ho!
By Barbara Robertson
Issue: Volume: 29 Issue: 7 (July 2006)

After you’ve seen Davy Jones’ beard writhe like a sea creature. After you learn that Industrial Light & Magic discovered a way to motion-capture the performances of multiple actors during principal photography without using one piece of motion-capture equipment. After all that sinks in, it’s the details in Walt Disney Pictures’ Pirates of the Caribbean: Dead Man’s Chest that will amaze you.

Gore Verbinski returned to direct this sequel to the popular action/adventure/comedy Pirates of the Caribbean: The Curse of the Black Pearl, as did actors Johnny Depp, Keira Knightley, and Orlando Bloom, visual effects supervisor John Knoll, and ILM. The sequel has 1100 visual effects shots, of which 600 are animation shots. Digital pirate Davy Jones stars in 200 shots; he’s on screen for 15 minutes of the film. “It was a lot to push through this facility in a short period of time,” says Knoll, who rolled onto the show immediately after finishing last summer’s Star Wars: Episode III—Revenge of the Sith. ILM handled all the shots except approximately 60 composites that the facility farmed out, under Roger Guyett’s supervision, to Tippett Studio and Evil Eye Pictures.

Knoll divides the work into three categories: Davy Jones and his crew, Kraken, and shots that establish time and place. Davy Jones and all but one of his crew are CG creatures played by actors whose performances were captured on location: ILM recorded motion data during principal photography in a variety of lighting conditions and environments without restriction. Kraken is a giant sea monster performed by animators using a sophisticated new rigging system. The “time and place” shots included turning Dominica, a lush island in the Caribbean, into Cannibal Island with fully CG environments. Steve Sullivan, director of R&D at ILM, cites three areas in which the film pushed technology at the studio: simulation, rendering, and motion capture.

“This was a very simulation-heavy show,” Sullivan says. “You’ll see that in Davy Jones and other characters. The rendering moved forward, not just in the aspects of the look, but also in making the characters tractable to render. And, the new motion-capture system made a big difference.”

Cod Is in the Details

Thirteen modelers led by creature supervisor Geoff Campbell sculpted the fishy characters. Gentle Giant’s body scans and face scans from ILM’s own Clonecam system—a photographic technique that provides 3D geometry and textures—gave modelers the actors’ proportions, but they built the CG characters entirely from concept art. Art director Aaron McBride drew many of the cursed, crustacean-encrusted sea phantoms based on sketches by Mark “Crash” McCreery, and added three of his own mutants—Angler, Wheelback, and Ratlin. “We have 18 hero characters that hold up close to the camera and 32 variations,” says McBride, who followed the artistic development of the characters through production. “The only non-digital character is Bootstrap Bill.”

Working in Autodesk’s Maya, the modelers spent close to 10 months of preproduction time building polygonal models of the characters’ bodies and, working in ILM’s proprietary software Zeno, creating the face shapes. Then, many of them carried on for the next eight months sculpting face shapes during production, as animators needed them. “We moved in a whole new direction,” says Campbell. “We went from B-spline models, which we’ve been using since Terminator, to subdivision surfaces. And, we used ZBrush for displacements. We couldn’t have created these characters in any other way because of their complexity.” Pixologic’s ZBrush helped the modelers cover the surfaces with barnacles, mussels, coral, and other briny details.

“Pat Meyers, one of our TDs, created a sea-life picker,” says David Meny, digital production supervisor. “We could pick one creature and instance it on the surface and adjust the scale. The ZBrushed components added organic variety to the whole. And then we also had painted textures—the painters got the ZBrush maps.”
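The pick-instance-and-scale workflow Meny describes can be illustrated with a small sketch. The asset names, scale range, and seed below are invented for illustration; this is not ILM's sea-life picker, just the general scattering idea.

```python
import random

# Toy take on a "sea-life picker": choose one asset and scatter instances
# over sample points on a surface, jittering scale so the crust of barnacles
# and mussels reads as organic rather than repeated. (Names and parameter
# ranges here are hypothetical.)

ASSETS = ["barnacle", "mussel", "coral", "tube_worm"]

def scatter(asset, surface_points, scale_range=(0.5, 1.5), seed=7):
    """Return one instance record per surface point, with a randomized scale."""
    rng = random.Random(seed)  # seeded so the set dressing is repeatable
    return [{"asset": asset,
             "position": p,
             "scale": rng.uniform(*scale_range)}
            for p in surface_points]

# Dress three sample points with barnacles:
instances = scatter("barnacle", [(0, 0, 0), (1, 0, 2), (3, 1, 0)])
```

Painted texture maps would then be layered over the instanced geometry, as the article notes.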

The creatures had so much detail that sometimes there were more vertices, more geometry to render, than there were pixels available. “It meant that anything that was raytraced, occlusion passes, or even subsurface scattering, was much slower because the raytracer ran for all the geometry, all the CVs inside a pixel,” says Christophe Hery, lead R&D engineer. “It summed everything up for one pixel at the end, but we were wasting a lot of time for not a lot of result.”

Because ILM decided to use RenderMan for the entire film, Hery contacted Pixar. An exchange of ideas resulted in Pixar adding a solution derived from a game technique documented in Nvidia’s GPU Gems 2 to RenderMan. The technique, which approximates raytracing, cut the render times by around 75 percent—from 12 to 13 hours to three or four.
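The GPU Gems 2 idea referenced here treats scene geometry as a cloud of oriented disks, so a disk's shadowing of a receiver point can be estimated analytically instead of by tracing rays against every CV. A rough sketch of that disk-occlusion estimate, with simplified weighting (this is an illustration of the published game technique, not Pixar's or ILM's implementation):

```python
import math

# Approximate ambient occlusion from oriented disks: each occluder is a disk
# with a center, a normal, and an area. The falloff term approximates the
# solid angle the disk subtends; the two cosine terms weight how squarely
# the surfaces face each other. Constants are simplified for illustration.

def disk_occlusion(receiver, receiver_n, disk_center, disk_n, disk_area):
    v = [c - r for c, r in zip(disk_center, receiver)]
    d2 = sum(x * x for x in v)
    if d2 == 0.0:
        return 0.0
    inv_len = 1.0 / math.sqrt(d2)
    v = [x * inv_len for x in v]  # unit vector from receiver to disk
    falloff = 1.0 - 1.0 / math.sqrt(disk_area / (math.pi * d2) + 1.0)
    facing_receiver = max(0.0, sum(a * b for a, b in zip(receiver_n, v)))
    facing_disk = max(0.0, -sum(a * b for a, b in zip(disk_n, v)))
    return falloff * facing_receiver * facing_disk

def ambient_occlusion(receiver, receiver_n, disks):
    """Accumulate (and clamp) occlusion over all occluder disks."""
    total = sum(disk_occlusion(receiver, receiver_n, c, n, a)
                for c, n, a in disks)
    return min(1.0, total)

# One small disk hovering directly above the receiver point:
ao = ambient_occlusion((0, 0, 0), (0, 0, 1), [((0, 0, 2), (0, 0, -1), 1.0)])
```

Because the cost scales with the number of disks rather than the number of vertices inside a pixel, dense geometry like the barnacle-crusted crew no longer makes occlusion passes explode.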


Meanwhile, ILM crewmembers working on set with Verbinski began acquiring data. They built digital representations of the environment so that later they could easily lock the digital characters to scans of the live-action plates. And, they captured the performances of the 15 actors playing the hero characters. Later, this motion-captured data would drive the character animation for the creatures that would replace the actors in the live-action plates.

Jason Snell, layout supervisor for the show and matchmove supervisor, built the digital representation of the set and location environments using photographs. “My job was to collect data for where the creatures were in the environment,” he says. “Most data collectors make diagrams and take measurements. At ILM, we build the environments on a laptop, while we’re on location.” To do this, Snell took photographs with a digital still camera of the entire environment from multiple angles and matchmoved each frame. Then, he looked for common points in several pictures. “If I have three pictures of a house and find one point in common, the system can triangulate those points and find a point in space,” he explains. By using multiple points in multiple images and Zeno’s tracking system, Snell could build a CG environment from the triangulated views. Then, later, once he had plate photography, he could lay those images on the 3D world that he built to create a photorealistic environment.
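The triangulation step Snell describes can be sketched in a few lines: once two photographs are matchmoved, the common feature point defines a ray from each camera, and the 3D point is where those rays (nearly) meet. This midpoint method is a textbook illustration, not Zeno's actual solver.

```python
import math

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Find the point in space closest to two rays (origin + t * direction).
    With perfect data the rays intersect; with noisy matchmoves they pass
    near each other, and we take the midpoint of closest approach."""
    def dot(u, v): return sum(x * y for x, y in zip(u, v))
    def sub(u, v): return [x - y for x, y in zip(u, v)]
    w = sub(origin_a, origin_b)
    a, b, c = dot(dir_a, dir_a), dot(dir_a, dir_b), dot(dir_b, dir_b)
    d, e = dot(dir_a, w), dot(dir_b, w)
    denom = a * c - b * b          # near zero when the rays are parallel
    t_a = (b * e - c * d) / denom
    t_b = (a * e - b * d) / denom
    p_a = [o + t_a * di for o, di in zip(origin_a, dir_a)]
    p_b = [o + t_b * di for o, di in zip(origin_b, dir_b)]
    return [(x + y) / 2 for x, y in zip(p_a, p_b)]

# Two cameras at different positions both sighting the point (1, 2, 5):
point = triangulate([0, 0, 0], [1, 2, 5], [4, 0, 0], [-3, 2, 5])
```

Repeating this over many common points in many photographs yields the cloud of 3D positions from which the on-location environment model is built.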

ILM captured data from actors in gray costumes during principal
photography (at left, below), applied the data to “manikins” (middle, below),
and then fitted the manikins to specified creatures (at right, below)
to create a final performance (top).

John Knoll supervised the motion capture on location. On the first Pirates, the actors playing the skeletal CG pirates were filmed on set in costume. To replace them with their skeletal counterparts, the crew used “matchimation”: Artists lined up a CG model with an actor’s image on the scanned film plate and copied the actor’s motions onto the 3D model. The same technique helped ILM blend a CG terminator into half of Arnold Schwarzenegger’s body in Terminator 3. But for the second Pirates film, animation director Hal Hickel, who had also supervised animation for Black Pearl, asked for a way to make the matchimation artists’ lives easier. “He asked if we could put some kind of tracking lights on the actors,” Knoll says.

Knoll never considered setting up a traditional motion-capture system on location in the Caribbean—not with 15 actors to capture, sometimes working in 2 feet of water, and not without completely disrupting principal photography, which could not happen. And yet, he wanted to gather the same high-quality data produced on a motion-capture stage. “That was the challenge we threw down to R&D, and they came up with a technique we call Imocap,” he says. The “I” stands for “image.”

On set, actors playing Davy’s crew wore gray suits provided by the costume department with tracking balls and bands from ILM. ILM had printed the bands with black-and-white dots set inside black-and-white squares, and positioned them at the joints—knees, ankles, wrists, waist, and so forth. “The dots help us see how the suit moves,” says Kevin Wooley, motion-capture engineer. “We’re not tracking dots in space.”

Because ILM’s character animation pipeline works with joints organized into skeletal animation rigs, Imocap puts the captured data onto skeletons. To an animator, the result looks the same as if a motion-capture crew had acquired the performance using a traditional system.

“We calibrated an animation rig (skeleton),” says Michael Sanders, digital supervisor in charge of data acquisition. Then, to calculate the motion for the skeleton, the Imocap system inferred the skeleton’s movement from the image of the body. “The skeleton and the images lock together,” he says, “and we get a per-frame motion solution from a sequence of images. It’s like the inverse of a motion-capture system, but with the same fidelity. Normal mocap is constrained by hardware, controlled lighting, and a calibrated environment. The inverse is no hardware constraints, no calibrated equipment, and full freedom to shoot in any location, under any lighting conditions, and in any environment.”
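Sanders's "inverse of a motion-capture system" can be illustrated with a toy version of the per-frame solve: rather than reconstructing markers directly, pose a calibrated skeleton so that its projection matches what the cameras saw. The two-bone planar arm, the brute-force search, and all names below are illustrative stand-ins, not ILM's Imocap solver.

```python
import math

BONES = [1.0, 0.8]  # bone lengths known from the calibrated rig

def forward(angles):
    """Forward kinematics: positions of each joint in a planar chain."""
    x = y = total = 0.0
    points = []
    for length, theta in zip(BONES, angles):
        total += theta
        x += length * math.cos(total)
        y += length * math.sin(total)
        points.append((x, y))
    return points

def error(angles, observed):
    """Squared mismatch between the posed skeleton and the observed joints."""
    return sum((px - ox) ** 2 + (py - oy) ** 2
               for (px, py), (ox, oy) in zip(forward(angles), observed))

def solve(observed, steps=200):
    """Brute-force search over joint angles. A production solver would use
    nonlinear least squares, but the objective is the same: lock the
    skeleton to the images frame by frame."""
    best = None
    for i in range(steps):
        for j in range(steps):
            angles = (2 * math.pi * i / steps, 2 * math.pi * j / steps)
            e = error(angles, observed)
            if best is None or e < best[0]:
                best = (e, angles)
    return best[1]

# Observations generated from a known pose; the solver should recover it.
truth = (0.5, 1.2)
angles = solve(forward(truth))
```

Run per frame over a shot, this yields exactly the kind of skeleton-driving data animators expect from a conventional mocap stage.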

Hadras, performed by Ho Kwan Tse, loses his head in a chase scene. Modelers sculpted the
creatures in Maya, created face shapes in Zeno, and added such detail as barnacles in ZBrush.

To help capture those images, while Verbinski directed and his camera operators filmed the dot-banded, gray-suited actors, Knoll and the ILM crew also shot the action using prosumer HD camcorders for reference. The extra coverage helped verify the 3D performance for Imocap; however, the technique worked even when all they had was the film image. “The technique, by design, had to be very forgiving,” says Knoll.

Once ILM had the footage, Snell oversaw a group of 16 people who matched the camera from the principal photography to the digital environment he had created from photos on set. Then, using the Imocap solver, they retargeted the performance of each actor onto his corresponding CG creature. It sounds a little like matchimation, but it’s leaps and bounds beyond. “I think of Imocap as a blend between match animation and motion capture that produces motion-capture data, a hybrid of techniques,” says Snell. “It sets a new bar for visual effects. Wait ’til you see these guys.”

Sneering Tentacles

It may be revolutionary, but, like traditional motion capture, it wasn’t perfect—an actor might cover another’s face, for example. Animators cleaned up the Imocap movement, animated special appendages like crab claws sticking out of a character’s back, and gave the faces expressions. “We gave all the characters facial shapes, even Ogilvey, whose head is made of coral and looks like he doesn’t have a mouth, and Crash, who is so eaten away he has sea anemones for a mouth,” says Campbell. “The tricky thing was translating Bill Nighy’s emotion to Davy Jones’ character. He’s a lip-synched main character.”


Davy Jones’ performance by Bill Nighy wearing a gray suit with tracking markers and bands at
left, top, was captured using ILM’s proprietary Imocap system and then applied to the cursed
creature at right, top. Keyframe animators and simulation artists performed the tentacles.
A new articulating rigid-body system controlled internal motors that bent the organist’s 46
tentacles forward and backward in different dimensions and at various speeds.

A custom analysis tool called Compare helped animators with that process. “Animators can quickly bounce back and forth between Bill Nighy’s performance and the CG character,” says Knoll. “They could look at the timing of an eye blink, how high the corner of his mouth goes on a sneer. It was our job to copy Nighy, not interpret what he was doing.”

Because Davy has no nose, the modelers had to find ways for the animators to reproduce Nighy’s contemptuous sneers through his facial tentacles. Although animators created the sneers, usually Davy Jones’ 46 tentacles moved procedurally thanks to an evolution of ILM’s rigid-body dynamics system.

The complex detail in such all-CG characters as Davy Jones caused ILM’s R&D department to
derive a new rendering solution based on a game technique to save time.

Each tentacle was made of cylinders connected with joints—imagine soup cans one below the next. ILM’s previous rigid-body dynamics could spin the soup cans relative to one another and rotate them in various directions, which was fine for battle droids in Star Wars that crashed to the floor and landed like rag dolls. But Davy’s tentacles needed to bend, curl, sway, and writhe. To make that possible, the simulation team put motors between the joints and created an articulating rigid-body dynamics system.

“We can feed the motors mathematical expressions or key-frame motion, and they’ll try to move on their own,” says James Tooley, senior technical director (TD) and animation supervisor. “They seek out and try to perform, and we can change that performance. You’ll notice that in some shots, they swish back and forth, but when Davy Jones gets angry, the tentacles get angry.” They also added something they called “sticktion,” which caused the tentacles to stick a bit to Davy as they swished—like wet spaghetti on a leather jacket, as Knoll puts it—rather than slide.

Each of the 46 tentacles could have as many as 40 moving parts. Karin Derlich, lead creature TD, developed a “super controller” system to manage the complexity. With the super controller, TDs could adjust parameters to specify the speed at which the motors moved the joints. To create the sine waves that made a tentacle curl like an octopus arm as it moved up, down, and sideways, and twisted in three dimensions, they selected which motors would go forward and which would go backward.
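The motors-seeking-targets idea can be sketched simply: feed each joint a target angle from an expression (here a traveling sine wave, the kind of input a controller might supply), and cap how fast each motor may move toward it. Joint counts, amplitudes, and speeds below are illustrative, not values from ILM's articulating rigid-body system.

```python
import math

NUM_JOINTS = 40   # up to 40 moving parts per tentacle
MAX_STEP = 0.05   # per-frame speed limit of each motor (radians)

def target_angle(joint, t):
    """Traveling wave: a phase offset per joint makes the chain curl
    and swish rather than bend uniformly."""
    return 0.4 * math.sin(2.0 * t - 0.5 * joint)

def step_motors(angles, t):
    """Each motor seeks its target angle but may only move MAX_STEP per
    frame, which keeps the motion smooth and 'motor-like'."""
    out = []
    for j, theta in enumerate(angles):
        delta = target_angle(j, t) - theta
        delta = max(-MAX_STEP, min(MAX_STEP, delta))  # clamp to motor speed
        out.append(theta + delta)
    return out

# Simulate five seconds of tentacle motion at 24 fps:
angles = [0.0] * NUM_JOINTS
for frame in range(120):
    angles = step_motors(angles, frame / 24.0)
```

Swapping the expression, or keyframing the targets, changes the performance without rebuilding the rig, which is the flexibility Tooley describes.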

Monster Effects

Kraken, the sea monster that Davy invokes, has enormously larger tentacles. We first see the squid-like creature’s giant tentacles rising from the sea to overtake the trading ship Edinburgh. “Most of the time the tentacles are flailing around causing destruction, picking up sailors, and smashing things,” says Hickel. Animators worked with main controllers spaced along the length of the tentacle, and with smaller subsets of controllers, to add wobbles and shocks without disturbing the larger animation. In some shots, they used shape animation for individual suckers on the underside of the tentacles to bring the suckers alive.

“We had 16 base suckers that animation TDs could instance with random variations,” says Meny. “They could change the look on a per-shot basis.”

The hardest compositing shots in the film were during Kraken destruction scenes, which included
CG water, real water and mist, real smoke, live-action actors, and the CG Kraken.

Creature modeler Frank Gravatt built the Kraken tentacles in three sizes: a small, noodly arm for snatching people, a medium-sized tentacle for wrapping the mast, and a crusher that smashed the ship. In addition, a full-body Kraken, with its gruesome maw, appears in a few shots. To animate the beast, ILM developed a new system. “Tentacles, snakes, ropes, things like that are, by their nature, very difficult to animate,” says Tooley, “especially when you need to pull on one end and have the whole thing follow.”

To solve that problem, Tooley’s team of 36 TDs developed a non-ballistic posing system on top of an inverse-kinematics control system. The result allowed animators to move the tentacles around as if they were pieces of rope. “We also added extra control systems to the underlying inverse-kinematics control structure,” Tooley says. “Animators could move any part of the tentacle where they wanted at any time. They could grow it, make it stretch, shrink it, or make it fatter, and, as it changed length, it also changed volume.”
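Pull-one-end-and-the-rest-follows behavior can be sketched with a follow-the-leader pass over a chain of fixed-length segments (the same family of ideas as FABRIK-style solvers). This is a simple stand-in for the layered control system Tooley describes, not ILM's implementation, and it omits the stretch-and-volume controls.

```python
import math

def drag(points, target, segment_length):
    """Move the first point of the chain to the target, then pull each
    successor back to a fixed distance from its already-moved predecessor,
    so every segment keeps its length and the chain trails like rope."""
    points = [list(target)] + [list(p) for p in points[1:]]
    for i in range(1, len(points)):
        prev, cur = points[i - 1], points[i]
        d = math.dist(prev, cur)
        if d > 0:
            # Slide cur along the line toward prev to restore segment length.
            scale = segment_length / d
            points[i] = [p + (c - p) * scale for p, c in zip(prev, cur)]
    return points

# A straight 10-segment tentacle along the x-axis; drag its tip upward.
chain = [[float(i), 0.0] for i in range(11)]
chain = drag(chain, (0.0, 3.0), 1.0)
```

Animators grabbing any point along the tentacle amounts to running such a pass outward from the grabbed point in both directions.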

On top of that, they sometimes added a flesh simulator, which also ran on Davy Jones’ tentacles and a few other faces, to add jiggle and to preserve the volume. “There’s a really cool shot I like of a tentacle moving past a cannon port,” Tooley says. “We used the flesh system, this tetrahedral volumetric system, so as the suction cups on the tentacle pass the cannon and the cannon port, you can see that they really get pushed around by this collision object and vibrate a little bit.”

Mayhem and Magic

Kraken’s attack causes some of the most difficult composites in a film filled with difficult composites. Compositing supervisor Eddie Pasquarello led a team that averaged around 65 compositors and peaked at 80. The compositors used ILM’s Comptime, Apple’s Shake, and ILM’s Sabre system, which is based on Autodesk Discreet’s Inferno.

“We had to integrate CG characters with live characters wearing makeup,” he says, referring to Davy and his crew, which includes Bootstrap Bill (Stellan Skarsgard), the only non-CG character. “And we had water integration. Ship rigging. Backgrounds behind. Bluescreen people. There were no easy shots.”

Animators could stretch or shrink the tentacle and move any part using a non-ballistic procedural
posing system on an inverse-kinematics control system.

Kraken picks up sailors with one tentacle while using another to smash onto the deck, and a third to wrap around a mast and rip it out. The ships are sometimes CG, sometimes models, and sometimes combinations of the two. There’s smoke, flying debris, and water spray—all photographed on set because Knoll believes in having the environmental effects shot in-camera. “It just looks more real,” Knoll says. “But it makes these shots the most horrible compositing nightmare you could imagine.” Rather than trying to extract smoke and water splashes from the plates, compositors laid the tentacles on top of the plate atmosphere and then layered smoke or water elements on the tentacles. A 100-layer shot, of a tentacle crashing through the middle of a ship, took two and a half months to composite.

Technical directors added detail in some shots to otherwise instanced tentacle suckers
with random variations.

Compositors also handled Turkish prison digimattes, a bone cage filled with Black Pearl crewmembers that dangled from simulated CG ropes across a canyon, backgrounds that turned Dominica into Cannibal Island, and many other shots.

Given the technical challenges, technical firsts, and sophisticated artistry, the work on this film should easily put the crew in the running for a visual effects Oscar. “As much as I know people like to mention technical things, what I was most pleased about was the quality of artistic talent on this show,” says Campbell.

The first Pirates captured a box-office treasure chest of $653 million as it sailed into the top 25 all-time box-office hits domestically and internationally. With Johnny Depp, several totally gross characters, and the same outrageous humor as in the original, the sequel is likely to capture enthusiastic audiences as well.

“It’s pretty neat and kind of disgusting, but not so much you can’t enjoy the fun of it,” says Hickel.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.