Go Large
Issue: Volume 36, Issue 5 (July/August 2013)


The visual effects studio Industrial Light & Magic is famous for its giant robots, but nothing, not even the Devastator in Transformers: Revenge of the Fallen or Colossus in Transformers: Dark of the Moon, compares with the Jaegers and Kaiju in Warner Bros.’ eagerly anticipated Pacific Rim. Directed by Guillermo del Toro, the sci-fi monster movie, set in the near future, pits machines called Jaegers against invading alien monsters called Kaiju, who rise from a crevasse in the Pacific Ocean. Robot-like in appearance, the Jaegers have two legs, two arms, a head, and a power generator. They take orders from two human pilots who ride inside their heads.

“Giant” scarcely describes the robots. Gypsy Danger, the hero Jaeger who appears throughout the film, is a 250-foot-tall walking battleship and power plant: a 25-story building that moves through the ocean and through cities. And Gypsy is but one of five main robots: the legendary and, perhaps, obsolete one.

The Godzilla-like aliens they fight are even larger. Gypsy Danger’s main opponent, Knifehead, stands 250 feet tall but is approximately 400 feet from nose to tail, and Slattern is 900 feet long. By comparison, King Kong is 25 feet tall. The Transformers films’ lovable Bumblebee robot is no more than 15 feet tall.

Thus, as might be expected, “giant” also scarcely describes the amount of work the ILM artists needed to do. The numbers tell part of the story: There were visual effects in approximately 60 percent of the film’s running time. Of the 1,550 visual effects shots that made it into the film, 600 were all-CG shots with the giant robots and/or aliens. Of those, 245 had robots and aliens fighting in digital water. And, whether the CG characters were in simulated seas or on land, they often had simulated rain pouring down on their heads and cascading over their bodies as they moved.

INSIDE THE immense CG robots’ heads are two human pilots.

“Scale was an ongoing challenge,” says John Knoll, creative director at ILM and visual effects supervisor on Pacific Rim. Working with Knoll were ILM’s Visual Effects Supervisors Lindy DeQuattro and Eddie Pasquarello.

“The ratio between a six-foot human and Gypsy Danger is a factor of about 40,” Knoll says. “So, how fast should something like that move? If you are going to shoot humans and have something 250 feet tall move with the same physics, you’d have to shoot at 150 to 160 frames per second. No one wanted to have [the creatures] be slow and boring. But if you cheat the scale and have the characters move faster than what physics would dictate, then how does the water behave when the characters slosh around?”

And, when the characters are on land and thrash through Hong Kong? “Same thing,” Knoll says. “When they smash through buildings, we want things breaking and falling, and the dust swirling, to have good physics. We had to ensure the simulations didn’t explode.”

A JAEGER FACTORY called the “Shatterdome” that surrounds the actors is all-CG.

New Tools, New Workflow

“Pacific Rim was a big technology experiment largely centered around improving the efficiency of the pipeline,” says John Knoll, creative director at ILM and visual effects supervisor on this film. “In terms of cool R&D and new technology, that was relatively limited. It was more about trying to work out a way to deliver the same quality of work we’re known for at a lower price.”

Autodesk’s Maya is still the studio’s primary animation tool. The new tools included The Foundry’s Katana, Solid Angle’s Arnold, and Side Effects Software’s Houdini.

“On Mission Impossible, I did one sequence with Katana and Arnold,” Knoll says, “and I wanted to see what it would be like on a bigger project, so I said, ‘Let’s do the whole thing that way.’ Katana was a very positive experience. It’s still early in its product life cycle, and we’re happy to work with The Foundry on it.”

As for Arnold: “It was a mixed experience, but generally I like it,” Knoll says. “I like the directness of raytracers, the truthfulness of them, and how little cheating you do. But, the higher render times are definitely a challenge. Every renderer makes tradeoffs for what’s important to them. [Pixar’s] RenderMan historically places computational efficiency higher than the workload of the artists setting things up. Raytracers are simple for artists to set up, but you live with higher render times. This was our first big show with Arnold, and we were less experienced with which knobs to turn, so we got help from Solid Angle. We spent a lot of time learning how to optimize scenes.”

The simulation artists used Houdini primarily for destruction and a little bit for shots with rain. “We did a lot of simulation work in Houdini,” Knoll says. “Some people liked it. Some people prefer the in-house tools.”

Although the idea of changing pipeline tools and becoming more efficient all at the same time might seem contradictory, Knoll is pleased with the effort. “In the end,” he says, “I think we did achieve the lower numbers despite the chaos and uncertainty from the big pipeline rework.” – Barbara Robertson

Mech Porn

Model Supervisor Dave Fogler led a crew of 30 artists at ILM in San Francisco and Singapore who worked for a year and a half building and texturing the big ’bots using Autodesk’s Maya, Pixologic’s ZBrush, and The Foundry’s Mari, along with proprietary tools.

The modelers began creating Gypsy Danger by working from 2D art and a clay sculpture. “The art and sculptures had competing designs,” Fogler says. “They had nothing to do with each other. Guillermo [del Toro] would say, ‘I like this here and that there. Let’s put together a Gypsy Danger we like.’ I could talk for hours about how great he is to work with. His eye is good. He’s immensely reasonable and intelligent. He invites everyone’s contribution and talent.”

During the film, the story calls for Gypsy to have two upgrades. “In addition to that, she’s in battle after battle,” DeQuattro says. “Every time something explodes against her or she’s kicked, we needed a new map.”

Two artists, and sometimes three, worked on the massive robot for more than a year. “Sometimes even four artists,” Fogler says. “And they were our best artists. She [the crew extended the battleship analogy with the pronoun] is damaged in six or seven different fights, so we made 21 different versions. Gypsy with her arm torn off. Gypsy with acid burns on her arms, and so forth. A lot of the effort went into texturing.”

Her base model has 2,000 parts – a big number but less than that of a Transformer robot, even though she’s much larger. “I’ll tell you why,” Fogler says. “We put extra time into creating efficient assets. We did a lot of deep thinking about how to create a giant robot that accomplishes complexity without overburdening the system.”

But, her base model doesn’t tell the whole story. “Guillermo bubbled over at the idea we could see functioning parts at the shoulder and knee,” Fogler says. “So the route we took was to make the machine actually work.”

When the audience sees Gypsy Danger lift her arm above her head, the hydraulics function in the rig. Same with her knee. “When you have a knee the size of a bus that actually cogs around and functions, it’s a challenge,” Fogler says. “Modelers, riggers, and animators were often in a painful and drawn-out loop. But we didn’t have much after-the-fact fudging.”

To help create a design that worked for everyone, the crew sometimes went outside the typical postproduction paradigm in which drawings go to modelers, then to riggers, then to animators.

“With mechanical characters, so many things can go wrong; we needed someone in the middle who could mock up a design quickly,” says Hal Hickel, animation supervisor. “Gypsy’s shoulders were a particular nightmare. The parts had to fit in a plausible way, and we couldn’t have pieces crashing into each other when she moves an arm overhead. Chris Mitchell, an animator on the show, was a natural at that. He is pretty good at rigging, can do some basic modeling, and has a great brain for machinery, so he would prototype things. Put a drivetrain here. A piston there.”

Mitchell’s prototype went to Fogler’s crew, which made the final model with proper UVs for surfacing, and from there, to rigging for character TDs to install the sophisticated animation controls.

“Guillermo appreciated that we took time to have our mechanics make sense within reason,” Fogler says. “Pistons function and push things. Cogs fit into other cogs. He calls it 'mech porn,' and he embraced it. In the opening of the film, there’s a mech porn sequence. Gypsy’s head travels down a shaft, and then her arms pull it back on. Cogs cog. Things steam. That comes from the tradition of Kaiju films, which always have a sequence where, for no good reason, a robot’s head comes off, the head goes down an elevator, and then joins back on the body.”

Thus, although Gypsy’s base model has 2,000 parts, the modelers also created a high-detail Gypsy with 20,000 parts. “We could flip a switch, turn on the high detail, and the lights would dim,” Fogler laughs. “But, when a shot needed the detail, it would be there.”

In addition, textures added to the visual complexity without increasing the number of parts. “Anything that didn’t move was a texture,” Fogler says. “We had to do a texture optimization, though, because the renderer we used, [Solid Angle’s] Arnold, doesn’t love displacement.”

With 1,550 shots and characters at this scale, efficiency was important. “A lot of what we did was not glamorous,” DeQuattro says. “It was making our pipeline and assets as efficient as we could to get these things rendered. We had big characters move through bigger cities. In Hong Kong, we created three or four neighborhoods entirely, so we found ways to build the city without modeling each building as a hero. And then, we destroyed the whole city. If they moved a shot with Gypsy from one point in a battle to another, we’d have to re-render because she would have a different map.”

“I have a crazy statistic,” DeQuattro says. “If we took all the jobs for the film and ran them on one processor, it would have taken 7,000 years to finish, and that’s with all our efficiencies. At the height of the show, we were using 600TB of data.”

To keep track of all Gypsy’s parts and texture maps in the various levels of detail and damage, the crew used a tagging system. “We didn’t carry all the geometry around in the base model,” Fogler says. “A wire frayed in five shots didn’t sit in the asset for everyone to lug around for the rest of the movie. So, if you were an animator or a TD running a shot, you might want a C1 Gypsy with a B2a arm, and A2 detail.”

Fogler created the tags, wrote them on a whiteboard, and documented them elsewhere. “People would run into my office, look at the whiteboard, and run back,” Fogler says. “We try to be smart, and obviously our systems are smart and complex, but at the end of the day, sometimes what you have is a collection of artists who brute-force their work through.”
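Fogler’s whiteboard tags can be pictured as a simple lookup table that assembles a shot-specific parts list. The sketch below is purely illustrative: every tag, part, and file name is invented, and ILM’s actual pipeline tools are proprietary.

```python
# Illustrative sketch (not ILM's pipeline): resolving a character variant
# from damage/detail tags like the "C1 Gypsy with a B2a arm, A2 detail"
# combination Fogler describes. All tag and file names here are invented.

# Catalog of part variants, keyed by (part, tag).
PART_VARIANTS = {
    ("torso", "C1"): "gypsy_torso_C1.abc",
    ("arm_left", "B2a"): "gypsy_armL_B2a_torn.abc",
    ("arm_left", "A1"): "gypsy_armL_A1_clean.abc",
    ("detail", "A2"): "gypsy_detail_A2_hi.abc",
}

def resolve_asset(tags):
    """Assemble a shot-specific parts list from a tag dictionary,
    so a shot loads only the geometry variants it actually needs."""
    files = []
    for part, tag in tags.items():
        key = (part, tag)
        if key not in PART_VARIANTS:
            raise KeyError(f"no variant {tag!r} for part {part!r}")
        files.append(PART_VARIANTS[key])
    return files

shot_tags = {"torso": "C1", "arm_left": "B2a", "detail": "A2"}
print(resolve_asset(shot_tags))
```

The point of the scheme, as Fogler describes it, is that a frayed wire used in five shots never burdens the base asset everyone else carries around.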

The other characters made fewer demands on the team, although their designs were just as popular with the modelers and texture artists. Each Jaeger’s design reflected the home country of its pilots. “All the characters in the film, the Jaegers and the Kaiju, have design surprises,” Fogler says. “The Russian Jaeger has pistons for hands. The main Kaiju lifts its arms and wings fold out. The Australian Jaeger is newest, so its panels are more futuristic and it doesn’t have rivets.”

Weighty Moves

Performing the monstrous machines and the gigantic aliens took three teams of ILM animators – 25 in San Francisco, seven in Vancouver, and 11 in Singapore. “I had many of the same people who worked with me on Rango,” Hickel says, “so I knew them well. We were like a 24-hour studio. We’d do [Cospective’s] CineSyncs in San Francisco, go to bed, and the next morning we’d have a bunch of work to review from Singapore.”

Hickel began animating Gypsy Danger and Knifehead, the main alien, before the studio had officially started work on the project. “We didn’t use the assets in the film, but the designs were close enough that, even later, I could try things out with those two,” he says. “I did a series of shots of Gypsy walking through a city of tall buildings, knowing there would be action in Hong Kong. I wanted to know how she would move, how quickly, and how we could give a sense of scale.” Because no street in Hong Kong was wide enough to accommodate the giant machines, the team at ILM later carved a wide avenue with a parkway through the middle of the digital Hong Kong that they created.

Big in itself wasn’t Hickel’s most difficult problem, though. “Big things need to move slowly or apparently slowly,” he says. “The problem was how to keep the action exciting while keeping the characters looking big when they are surrounded by realistic, physically based simulations.”

Excitement happened through cheating the physically accurate slowness with fast cuts and with camera angles. “We’d mount the camera on a Jaeger’s fist as it swings toward a creature,” Hickel says. “In a wide shot, that might look like slow motion, but the fist is traveling 120 mph as it hurtles toward the face of a Kaiju.”

Fitting the characters into the physically based simulations came down to what Hickel calls a “million little decisions.” “The water had to look realistic, so if we had a creature moving through the water, we had to be mindful of the simulation,” Hickel says. “If an animator had a foot moving 900 mph to make a battle exciting, the water would explode.”

Most of the battle scenes were 100 percent digital, including all the battles that took place in the ocean. The work began with storyboards created in del Toro’s production company. Then, he would shoot the actors playing the pilots inside the heads of the Jaegers.


To combat the invasion of gigantic Godzilla-like aliens, countries around the world banded together to build the enormous Jaeger robot-like machines inside hangars called Shatterdomes. The original Shatterdome, set in Alaska, could house one or two Jaegers. The main Shatterdome in the film, located in Hong Kong, fit eight of the 25-story-tall Jaegers. The octagonal structure is approximately 1,000 feet wide and 450 feet tall. It sits 43 levels underground with a door at sea level and a landing pad 42 stories above sea level.

A crew of 100 artists at BaseFX in Beijing, led by Industrial Light & Magic’s Visual Effects Supervisor Eddie Pasquarello, created 350 shots in the Shatterdomes using a 3ds Max (Autodesk) and V-Ray (Chaos Group) pipeline. “ILM provided all the assets,” Pasquarello says. “The Shatterdome, the Jaegers. The artists at Base lit them, dressed the sets, and added the extras.”

The production crew had filmed actors in a partial set for the artists to extend. “A lot of times, we had only a floor and a greenscreen surrounding the actors,” Pasquarello says. “In other sequences, we had a control room in the foreground, and we built everything you can see through the window. Each of the eight Jaegers had its own bay in the octagonal space, and the sets gave us partial cues for the bays, but we didn’t have enough to match. We had to create the bays.”

For reference, Pasquarello looked at space shuttle documentaries. “We needed to see something with a large, vertical ship, and we found pictures of the space shuttle with scaffolding and tons of workers,” he says. “We wanted a very active site.”

Thus, the Base artists added vehicles, cranes, and other equipment to the sets, as well as scaffolding with welders working on the Jaegers. “We had a catalog of 50 Shatterdome props that we could apply to a Jaeger or the dome,” Pasquarello says. “The shots were very complex; there were so many moving pieces.”

For Pasquarello, the challenge was to fit these 350 shots seamlessly into the rest of the film, even though he was miles away from Visual Effects Supervisor John Knoll and Director Guillermo del Toro. He would do CineSync sessions with both, and he also joined the dailies whenever he could to stay aware of the other shots.

“The grunginess of the Shatterdomes came through Guillermo’s direction. The tone of the place,” Pasquarello says. “At first, it was too clean, so we had to make it feel more like a working place that would get dirty. Guillermo is amazing. He can break apart the frame in a way that changes the mood from one area to another. Sometimes, it’s just the way sections are lit. You know that if you worked in that area, you would deepen the mood. By breaking up the areas, we could feel the space. Although ILM provided the assets, we had the opportunity to help develop the look.”

Working at arm’s length from the studio in San Francisco had some advantages. “We were this little offshoot project,” Pasquarello says. “We weren’t out of sight, out of mind, but it was kind of what I wanted. Our challenge was to contribute to the film without diverting John’s attention; to complement the film without burdening the ILM crew, which was taxed already. And the cool thing about the team in China is that they rose to the occasion. They were really dedicated. They did great.” – Barbara Robertson

On set, the actors sat in motion-controlled bases. “We’d get a live-action shot of the interior of the head and ‘board-a-matics’ of everything else,” Hickel says. “That’s when the shot became CG. We’d give the storyboards to our layout department and the process became like the layout process on an animated feature.”

When the layout department had added a camera move, the shot moved on to the animators. “Guillermo likes to keep the camera moving and active, so there was a lot of discussion as our camera guys got to know his style,” Hickel says.

For the ocean battles, the animators received a proxy version of the ocean surface. “If the characters were up to their waist in water, I never saw the feet,” Hickel says. “I’d assume they were correct. But occasionally, I’d get a call from the fluid-simulation department saying the water was freaking out, and we’d have to go back and fix the animation.”

GYPSY DANGER stands approximately 25 stories tall.

Water Fights

CG Supervisor Ryan Hopkins oversaw the fluid simulations. “We had probably 245 shots that had surface water or in which the characters were underwater,” he says. “And almost the entire movie takes place in the rain.”

ILM’s simulation artists use proprietary tools within the studio’s Zeno system to move oceans, waves, and other types of water. “We started with the tools we had created for Battleship,” Hopkins says (see “Water World,” April/May 2012). “And then we looked at the challenges for this show. Battleship mixed live-action footage and the dynamics of ships in the water. We had creatures fighting. It was of utmost importance to sell their scale.”

Hopkins met with del Toro early in the process to discuss the challenges of keeping the water physics real with 250-foot creatures fighting in the digital water. “If we were totally accurate,” Hopkins says, “water displacement would cover all the action. His instruction to us was, ‘Beautiful first. Accurate second.’ That was a great challenge. It was scary at first. But, it also gave us creative freedom. It became one of the best projects I’ve ever worked on. Guillermo was such a great director. He knew what he wanted, but he kept the work collaborative. And, he’s a great, funny guy.”

For reference, the simulation artists looked at videos of calving glaciers breaking into the water, container ships launched sideways that slammed into the water, and dam-busting bombs from World War II. “They are big, spherical bombs that go into the water at 150 miles per hour,” Hopkins says. “You see the water burst into a pyroclastic mist. Usually, CG water looks like it has a stringy surface tension. But, when you look at large-scale water, you don’t see that. Surface tension holds water together at a small scale, but if you watch a glacier break off and fall into the water, the water goes through the air so fast it instantly vaporizes into a mist. It doesn’t hold together. That was our goal.”

Hopkins and the team also realized that they would need to create shot-specific simulations. “If you have a movie with similar effects among many shots, you can create an asset, plug it in, and tweak it,” Hopkins says. “On this show, even if you have the same creatures fighting, they would be doing different fight moves in every shot. They run through the water and slam an opponent’s body into the water. We couldn’t create one asset that would quickly make every shot look good.”

Instead, the team created a low-resolution base fluid simulation and layered higher-resolution simulations exactly where needed for each shot. In one sequence, for example, the harbor water is flat, so areas outside a fight zone could have a lower-resolution simulation and areas around the creatures higher-resolution simulations. “That allowed fast turnaround,” Hopkins says. “We could get a buy-off on how the creatures disrupted the surface. Then, we could work on how the white water would look and how much it would cover the creatures.”
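The layered-resolution idea can be sketched in miniature: sample the ocean coarsely everywhere and finely only inside the fight zone. This is an illustrative toy in Python, not ILM’s Zeno tools; the sampling steps and the sine-wave ocean are invented for the example.

```python
# Illustrative sketch (not ILM's Zeno tools): layering a high-resolution
# region over a coarse base simulation. A 1D "ocean strip" is sampled
# coarsely everywhere, then finely only inside a fight zone.
import numpy as np

def sample_ocean(x):
    """Stand-in for an ocean height function (a simple wave)."""
    return 0.5 * np.sin(0.1 * x)

def layered_heights(length, coarse_step, fine_step, zone):
    """Return (positions, heights), sampling with fine_step inside
    `zone` (a (start, end) tuple) and coarse_step elsewhere."""
    xs = []
    x = 0.0
    while x < length:
        step = fine_step if zone[0] <= x <= zone[1] else coarse_step
        xs.append(x)
        x += step
    xs = np.array(xs)
    return xs, sample_ocean(xs)

xs, hs = layered_heights(length=1000.0, coarse_step=50.0,
                         fine_step=5.0, zone=(400.0, 600.0))
# The fight zone contributes far more samples than the calm harbor around it.
in_zone = ((xs >= 400) & (xs <= 600)).sum()
print(len(xs), in_zone)  # prints: 57 41
```

The same trade applies in 3D: the flat harbor gets a cheap base pass, and only the water the creatures disrupt pays for high resolution.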

Velocity-damping controls helped the artists affect the speed of the splashes, and that, in turn, helped keep the water moving at a rate that was in scale with the monsters. “If we did a simulation without the damping controls, it looked like two people in rubber suits splashing around,” Hopkins says. “The speed of the water helped sell the scale of the creatures.”
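A velocity-damping control of the kind Hopkins describes might look like the following sketch, which blends particle speeds back toward a cap. The parameter names and the blending rule are assumptions for illustration, not ILM’s implementation.

```python
# Illustrative sketch of a velocity-damping control: splash particles
# above a speed limit are scaled down so the water reads at the
# creatures' scale. Parameter names are invented.
import numpy as np

def damp_velocities(vel, max_speed, damping):
    """Blend particle velocities toward a speed cap.

    vel       : (N, 3) array of particle velocities
    max_speed : speed above which damping kicks in
    damping   : 0 = no damping, 1 = hard clamp to max_speed
    """
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    over = np.maximum(speed - max_speed, 0.0)
    target = speed - damping * over                       # damped speed
    scale = np.where(speed > 0, target / np.maximum(speed, 1e-9), 1.0)
    return vel * scale

v = np.array([[30.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
print(damp_velocities(v, max_speed=10.0, damping=0.5))
```

With damping at 1.0 this reduces to a hard speed clamp; lower values, as in Hopkins’ description, only slow the fastest splashes so the motion still reads as water, just water at a 250-foot scale.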

To control how much water could cover the creatures, the artists used a procedural technique to specify the depth of collisions. “Guillermo put the camera low to the water, as if it were on a boat,” Hopkins says. “Most of the time the camera looks up. So, we have a camera five feet above the water when a 250-foot creature falls into the water 50 feet away.”

If they had simulated the water dynamics with real physics, the collision of monster and water would have created massive bulges – massive waves – that would have covered the action. “We found that if we sliced off the collision at 10 or 15 feet into the water, we would get a nice bulge that we could control,” Hopkins says. “That was the key to balancing the scale.”
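Slicing off the collision at a fixed depth can be pictured as a filter on the collider geometry before the fluid solver sees it. This Python sketch is an assumption-laden toy: the point-sampled collider and the 15-foot cutoff stand in for whatever ILM’s proprietary setup actually did.

```python
# Illustrative sketch of depth-sliced collision: collider samples deeper
# than a cutoff are discarded before the fluid solver sees them, so a
# falling monster raises a controllable bulge instead of a screen-filling
# wave. The sampling and cutoff here are invented for the example.
import numpy as np

def slice_collider(points, water_level=0.0, cutoff_depth=15.0):
    """Keep only collider samples within `cutoff_depth` feet of the
    water surface; deeper geometry is ignored by the simulation."""
    depth = water_level - points[:, 1]     # y-up: positive = underwater
    keep = depth <= cutoff_depth
    return points[keep]

# A monster limb sampled from 30 ft above to 40 ft below the surface.
limb = np.column_stack([np.zeros(8), np.linspace(30.0, -40.0, 8), np.zeros(8)])
sliced = slice_collider(limb, cutoff_depth=15.0)
print(limb.shape[0], sliced.shape[0])  # prints: 8 5
```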

MANY OF THE BATTLES between the Jaegers and aliens, such as this, happen in digital oceans and rain.

Water Works

Rain falls onto these massive characters and cascades down their surfaces – two separate problems.

“Because of the scale of the shots, it’s hard to realize how fast the camera moves through the air,” Hopkins says. “If the creatures were people, you would have the camera on a 10-foot dolly, rolling slowly, and the falling rain would look vertical. But to get the same relative motion of the camera at this scale, the dolly would be 500 feet and the rain would whip sideways because the camera would be moving 50 miles per hour. So we had controls that the artists could use to compensate for the camera velocity.” Thus, to create shots in the rain, not in a hurricane, the artists could choose how fast the water particles fell.
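One way to picture the camera-velocity compensation Hopkins describes: let the rain inherit some fraction of the camera’s motion, so the apparent fall in camera space stays vertical instead of whipping sideways. The `compensation` parameter and the whole formulation are assumptions for this sketch.

```python
# Illustrative sketch of camera-velocity compensation for rain.
# With compensation = 1 the rain inherits the camera's motion and
# reads as a vertical fall; with 0 it streaks sideways at scale-level
# camera speeds. The parameterization is invented for this example.
import numpy as np

def apparent_rain(fall_speed, camera_vel, compensation):
    """Rain velocity as seen from a camera moving at camera_vel."""
    cam = np.asarray(camera_vel, float)
    world = np.array([0.0, -fall_speed, 0.0]) + compensation * cam
    return world - cam              # transform into the camera's frame

cam = [22.0, 0.0, 0.0]              # roughly a 50 mph dolly move, in m/s
print(apparent_rain(9.0, cam, compensation=0.0))  # sideways streaks
print(apparent_rain(9.0, cam, compensation=1.0))  # vertical fall
```

In effect, the artists could dial between physically honest rain and rain that reads like rain rather than a hurricane, per shot.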

The rain cascading down the creatures’ surfaces presented another problem. For reference, Hopkins looked at waterfalls pouring down high cliffs with wind blowing on them. “The moment the water falls, it’s aerated by the wind,” he says. “I told the artists, no meshing of the water. No surfaces in the cascades. If the creatures had water blobs coming off them from meshing, they would look like people in suits. I’d say, ‘Imagine you threw a bucket of water off a 25-story building. You’d see it vaporize.’”

To make the cascading rainwater look interesting, the artists used air displacement caused by the creatures’ movement. The same idea worked for white water in the water fights, as well.

“The creatures in the fights create their own wind,” Hopkins says. He likens it to the blast of air someone standing near a freeway would feel when a big semitrailer truck zooms by. “Now, imagine this massive monster,” he says. “Water doesn’t just fall with gravity behind it. You have air displacement. Air fields are what helped make the white water simulations look real. This was such a great challenge to take on. It really pushed us.”
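An air field of the kind Hopkins credits for the white water can be sketched as a wind sourced from the creature’s own motion and faded with distance. The linear falloff and every parameter below are invented for illustration; they are not ILM’s values.

```python
# Illustrative sketch of a creature-driven "air field": a moving monster
# pushes a wind that advects nearby spray, so water doesn't just fall
# with gravity behind it. Falloff shape and strength are invented.
import numpy as np

def creature_wind(points, creature_pos, creature_vel, radius):
    """Wind velocity at each point: the creature's velocity, faded
    linearly with distance to zero at `radius`."""
    offset = points - creature_pos
    dist = np.linalg.norm(offset, axis=1, keepdims=True)
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return falloff * np.asarray(creature_vel, float)

pts = np.array([[10.0, 0.0, 0.0], [200.0, 0.0, 0.0]])
wind = creature_wind(pts, np.zeros(3), [40.0, 0.0, 0.0], radius=100.0)
print(wind)  # strong wind near the creature, none 200 ft away
```

Spray particles sampled through such a field pick up the semitrailer-blast effect Hopkins describes, instead of falling in still air.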

Every time we think we’ve seen the biggest effects ever, a new film comes along that pushes the art of visual effects larger. The challenges ILM had with scale seem obvious at first glance, but when we dive deep into the ramifications, the work of the artists who met that challenge becomes even more impressive.

Barbara Robertson is an award-winning writer and a contributing editor for CGW. She can be reached at BarbaraRR@comcast.net.